Learning Human Actions: From Perception to Robot Learning


While understanding and replicating human activities is an easy task for other humans, it remains a difficult skill to program into a robot. Understanding human actions has become an increasingly popular research topic owing to its wide range of applications, such as automated grocery stores, surveillance, sports coaching software, and ambient assisted living. In robotics, understanding human actions not only allows us to classify an action automatically but also to learn how to replicate it to achieve a specific goal. Teaching robots to achieve goals by demonstrating the movements reduces programming effort and makes it possible to teach more complex tasks. Human demonstration of simple robotic tasks has already found its way into industry (e.g. robotic painting, simple pick-and-place of rigid objects), but it cannot yet be applied to the dexterous handling of generic objects (e.g. soft and delicate objects), which would enable broader applicability (e.g. food handling). This talk presents an approach for recognising sequences of human social activities using a combination of probabilistic temporal ensembles, together with a system for teleoperating a robot hand using a low-cost setup composed of a haptic glove and a depth camera.

Nov 8, 2017 9:00 AM — 5:00 PM
London, UK
Claudio Coppola
Robotics and Machine Learning Scientist

Machine learning and robotics expert with experience in industry and academia applying AI and data science to transportation forecasting, manufacturing automation, robotic perception, and human-robot interaction.