We do research on robot manipulation driven by state-of-the-art machine learning methods. Over the last decade, research on robot learning and human-robot skill transfer has progressed significantly. Our work focuses on novel machine learning approaches that combine imitation learning with reinforcement learning to achieve higher levels of dexterity in manipulation, covering rigid and soft objects, single- and dual-arm manipulation, whole-body manipulation, and visuospatial skill learning.

Visuospatial skill learning for robots

Visuospatial skill learning allows a robot to visually perceive objects and acquire new skills based on the spatial relationships among them. Its main advantage is that the robot can learn to generalise from a single demonstration.
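The core idea can be sketched in a few lines: record where each object ends up relative to a landmark in one demonstration, then reproduce those relative placements in a new scene. This is a minimal illustration, not the published method; the "tray" and "cup" objects and 2-D positions are hypothetical.

```python
import numpy as np

# One demonstration: object positions before and after the teacher rearranges
# them. The skill is captured as each object's goal pose RELATIVE to a
# landmark (here a hypothetical "tray"), so it generalises when the tray moves.
demo_after = {"tray": np.array([0.0, 0.0]), "cup": np.array([0.1, 0.05])}

def learn_relative_goals(after, landmark="tray"):
    """Record each object's goal pose relative to the landmark (single demo)."""
    return {name: pos - after[landmark]
            for name, pos in after.items() if name != landmark}

def reproduce(scene, relative_goals, landmark="tray"):
    """Place each object at the learned offset from the landmark in a NEW scene."""
    return {name: scene[landmark] + offset
            for name, offset in relative_goals.items()}

goals = learn_relative_goals(demo_after)
new_scene = {"tray": np.array([1.0, -0.5]), "cup": np.array([0.2, 0.9])}
targets = reproduce(new_scene, goals)
```

Because only relative offsets are stored, a single demonstration is enough to place the cup correctly no matter where the tray appears in a new scene.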

Robot Learns to Flip Pancakes

The robot learns to flip pancakes via reinforcement learning. The motion is encoded as a mixture of basis force fields through an extension of Dynamic Movement Primitives (DMP).
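To make the DMP idea concrete, here is a minimal one-dimensional discrete DMP, a generic textbook formulation rather than the force-field extension used in this project; all gains and basis-function settings are illustrative.

```python
import numpy as np

class DMP:
    """Minimal one-dimensional discrete Dynamic Movement Primitive.

    Transformation system: tau^2 * y'' = alpha*(beta*(g - y) - tau*y') + f(x)
    Canonical system:      tau * x'   = -alpha_x * x
    f(x) is a normalised mixture of Gaussian basis functions, scaled by
    x*(g - y0), learned from a single demonstration.
    """

    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_x=4.0):
        self.n_basis, self.alpha, self.beta, self.alpha_x = n_basis, alpha, beta, alpha_x
        self.c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))  # basis centres
        self.h = n_basis ** 1.5 / self.c                            # basis widths

    def _psi(self, x):
        return np.exp(-self.h * (x - self.c) ** 2)

    def fit(self, y_demo, dt):
        """Learn basis weights from one demonstrated trajectory."""
        t = np.arange(len(y_demo)) * dt
        self.y0, self.g, self.tau = y_demo[0], y_demo[-1], t[-1]
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * t / self.tau)
        # Rearrange the transformation system to get the required forcing term.
        f_target = self.tau ** 2 * ydd - self.alpha * (self.beta * (self.g - y_demo) - self.tau * yd)
        scale = x * (self.g - self.y0)
        self.w = np.empty(self.n_basis)
        for i in range(self.n_basis):  # locally weighted regression per basis
            psi = np.exp(-self.h[i] * (x - self.c[i]) ** 2)
            self.w[i] = psi @ (scale * f_target) / (psi @ scale ** 2 + 1e-10)

    def rollout(self, dt):
        """Integrate the learned system to reproduce the demonstrated motion."""
        n = int(round(self.tau / dt)) + 1
        y, z, x = self.y0, 0.0, 1.0
        out = [y]
        for _ in range(n - 1):
            psi = self._psi(x)
            f = (psi @ self.w) / (psi.sum() + 1e-10) * x * (self.g - self.y0)
            z += dt * (self.alpha * (self.beta * (self.g - y) - z) + f) / self.tau
            y += dt * z / self.tau
            x += dt * (-self.alpha_x * x) / self.tau
            out.append(y)
        return np.array(out)

# Learn from a single minimum-jerk demonstration and reproduce it.
dt = 0.01
t = np.linspace(0.0, 1.0, 101)
demo = 10 * t**3 - 15 * t**4 + 6 * t**5   # smooth motion from 0 to 1
dmp = DMP()
dmp.fit(demo, dt)
traj = dmp.rollout(dt)
```

In a reinforcement-learning setting such as the pancake task, the basis weights (or the force fields that generalise them) become the policy parameters that the learner perturbs and improves between rollouts.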

Humanoid robot learns to clean a whiteboard

A humanoid robot learns to clean a whiteboard through upper-body kinesthetic teaching. This research stems from a collaboration with Tokyo City University; the robot is a Fujitsu HOAP-2 humanoid.

Robot learns the skill of archery

After being instructed how to hold the bow and release the arrow, the robot learns by itself to aim and shoot arrows at the target. The learning algorithm is called ARCHER (Augmented Reward Chained Regression).
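The key intuition behind ARCHER is that the learner sees not just a scalar reward but the full miss vector from the hit point to the target, which tells it how to correct the next shot. The sketch below captures only that intuition in a toy 2-D setting; it is not the published algorithm, and the aim-to-hit map, step size, and units are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([3.0, -1.0])             # bullseye position (hypothetical units)
A = np.array([[1.2, 0.1], [-0.2, 0.9]])    # unknown aim->hit map of the toy world

def shoot(aim):
    """Toy archery environment: the arrow lands at an unknown affine function
    of the aiming parameters, plus small noise; the learner sees only the hit."""
    return A @ aim + rng.normal(0.0, 0.01, 2)

def learn_to_aim(n_trials=40, step=0.5):
    """Error-vector-driven learning: each new aim corrects the best trial so
    far by a fraction of its miss vector (hit -> target)."""
    trials = [(np.zeros(2), shoot(np.zeros(2)))]
    for _ in range(n_trials):
        # Pick the best previous trial (hit closest to the target).
        best_aim, best_hit = min(trials, key=lambda tr: np.linalg.norm(tr[1] - TARGET))
        aim = best_aim + step * (TARGET - best_hit)   # chase the residual error
        trials.append((aim, shoot(aim)))
    return min(trials, key=lambda tr: np.linalg.norm(tr[1] - TARGET))

aim, hit = learn_to_aim()
```

Because each trial provides directional feedback rather than a single score, the learner needs far fewer shots than a reward-only method would.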

Learning Symbolic Representations of Actions from Human Demonstrations

Imitation learning enables a robot to acquire new trajectory-based skills from demonstrations. This approach integrates trajectory-based imitation learning with Visuospatial Skill Learning and a symbolic planner, so the robot can both reproduce demonstrated motions and plan over the symbolic actions it has learned.
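One simple way to bridge demonstrations and a symbolic planner is to compare the predicates that hold before and after a demonstrated action: facts that disappear become preconditions and delete effects, facts that appear become add effects. The sketch below uses a toy blocks-world predicate extraction with invented object names; it illustrates the idea, not the authors' actual representation.

```python
def predicates(scene):
    """scene: dict mapping each object to its support surface.
    Returns the set of on(x, y) facts that hold in the scene."""
    return {("on", obj, support) for obj, support in scene.items()}

# Observed scene before and after one demonstrated pick-and-place.
before = {"cup": "table", "plate": "table"}
after  = {"cup": "plate", "plate": "table"}

pre, post = predicates(before), predicates(after)
action = {
    "name": "place_cup_on_plate",         # hypothetical action label
    "preconditions": sorted(pre - post),  # facts that must hold beforehand
    "effects_add": sorted(post - pre),    # facts the action makes true
    "effects_del": sorted(pre - post),    # facts the action makes false
}
```

Actions extracted this way can be handed to an off-the-shelf symbolic planner, while the stored trajectories supply the low-level motions that realise each action.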

Robot WALK-MAN at DARPA Robotics Challenge

The WALK-MAN robot is getting ready for the DARPA Robotics Challenge 2015.