The long-term research goal of our lab is to advance the intelligence of robots. The term "robot intelligence" is very broad, encompassing both cognitive (mental) and physical (motor) aspects of intelligence. In particular, we focus on physical motor skill learning for challenging (often dynamic) tasks on highly complex (usually humanoid) robots.
Our research work often leads us to develop novel artificial intelligence algorithms for robotic applications. Sometimes, we first need to build the physical robot before we can apply learning algorithms, especially when some part of the intelligence is embedded in the hardware itself (e.g. embodied cognition, morphological computation).
Despite its explosive growth, robotics is still a relatively young discipline, and there are significant opportunities for disruptive innovations that can change the direction of the field. The Robot Intelligence Lab focuses on robotics research that seeks such innovations in the design, control, and intelligence of robotic systems. The ultimate goal is to advance the decisional autonomy of robots, improve their dexterity in manipulating objects, and increase their agility in legged locomotion.
Over the last decade, research on robot skill transfer and learning has progressed significantly. Future research will investigate novel machine learning approaches that combine learning by imitation with reinforcement learning to achieve higher levels of dexterity in manipulating diverse objects, including rigid and soft objects, liquids, and ropes.
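One common way to combine the two paradigms is to warm-start a reinforcement learner with a policy fitted to demonstrations. The sketch below is purely illustrative, not the lab's actual method: it assumes a toy 1-D action task, a linear policy fitted to synthetic "expert" demonstrations by least squares (the imitation phase), and simple random-search hill-climbing as a stand-in for reinforcement learning.

```python
# Hedged sketch: imitation warm start followed by RL-style fine-tuning.
# The task, policy class, and optimizer are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# --- Imitation phase: fit a linear policy to (noisy) expert demonstrations.
states = rng.normal(size=(100, 3))
expert_w = np.array([1.0, -2.0, 0.5])           # hypothetical expert policy
demo_actions = states @ expert_w + 0.01 * rng.normal(size=100)
w_imitation, *_ = np.linalg.lstsq(states, demo_actions, rcond=None)

def reward(w):
    """Task reward: negative mean squared deviation from the expert actions."""
    return -np.mean((states @ w - states @ expert_w) ** 2)

# --- RL phase: random-search fine-tuning starting from the imitated policy.
w = w_imitation.copy()
for _ in range(200):
    candidate = w + 0.05 * rng.normal(size=3)
    if reward(candidate) > reward(w):           # keep only improvements
        w = candidate
```

The warm start matters because the reinforcement phase begins its search near a behaviour that already solves most of the task, rather than from a random policy.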
We pursue two strands of research in locomotion: bio-inspired and function-centric. The bio-inspired strand focuses on emulating animal-like bipedal and quadrupedal locomotion, including walking, trotting, jumping, and running. The function-centric strand investigates innovative design and control approaches that do not necessarily copy any biological counterpart, such as knee-less legged robots and parallel robots.
The research focus is on developing state-of-the-art machine learning algorithms (including deep learning, reinforcement learning, and unsupervised learning) for intelligent robot behaviour. We aim to advance the decisional autonomy of robots and to improve their perception, their understanding of the world, and their prediction capabilities. The ultimate goal is to create algorithms that allow robots to learn autonomously from their environment in an open-ended and continuous way (i.e. lifelong learning).
Robot Design & Control
The design and control of a robot are tightly coupled. The way we design a robot determines the way it can be controlled, and vice versa. Machine learning can be applied not only to optimize the robot motion controllers, but also to evolve robot designs with a particular design objective in mind.
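The coupling of design and control can be made concrete by optimizing both together. The sketch below is a minimal illustration under stated assumptions, not the lab's actual pipeline: a toy scalar fitness stands in for a physics simulation of a legged robot, and a (1+1) evolution strategy mutates a design parameter (leg length) and a controller gain jointly, keeping whichever candidate performs better.

```python
# Hedged sketch: co-evolving a design parameter and a controller gain
# with a (1+1) evolution strategy. The "robot" is a toy analytic model;
# all parameters and the fitness function are illustrative assumptions.
import math
import random

random.seed(0)

def fitness(leg_length, gain):
    """Toy objective: performance is best when the controller gain is
    matched to the leg length; longer legs also incur an energy penalty."""
    hop = leg_length * gain * math.exp(-abs(gain - 1.0 / leg_length))
    penalty = 0.1 * leg_length ** 2
    return hop - penalty

# (1+1)-ES: mutate design and controller together, keep the better candidate.
design, gain = 0.5, 1.0
best = fitness(design, gain)
for _ in range(500):
    d = max(0.1, design + random.gauss(0, 0.05))   # mutate the design
    g = max(0.1, gain + random.gauss(0, 0.05))     # mutate the controller
    f = fitness(d, g)
    if f > best:
        design, gain, best = d, g, f
```

Because each mutation perturbs design and controller simultaneously, the search exploits their coupling: a change in morphology is only kept if a compatible controller change accompanies it.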