Videos from the lab
Head-Mounted Augmented Reality for Wheelchairs
Supplementary video for the Zolotas et al. IROS2018 paper
Abstract: Robotic wheelchairs with built-in assistive features, such as shared control, are an emerging means of providing independent mobility to severely disabled individuals. However, patients often struggle to build a mental model of their wheelchair's behaviour under different environmental conditions. Motivated by the desire to help users bridge this gap in perception, we propose a novel augmented reality system using a Microsoft HoloLens as a head-mounted aid for wheelchair navigation. The system displays visual feedback to the wearer as a way of explaining the underlying dynamics of the wheelchair's shared controller and its predicted future states. To investigate the influence of different interface design options, we also conducted a pilot study in which we evaluated the acceptance rate and learning curve of an immersive wheelchair training regime, revealing preliminary insights into the beneficial and adverse effects of different augmented reality cues on assistive navigation. In particular, we demonstrate that care should be taken in how information is presented, with effort-reducing cues for augmented information acquisition (for example, a rear-view display) being the most appreciated.
Conference: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018)
Authors: M. Zolotas, J. Elsdon and Y. Demiris
RT-GENE: Real-Time Gaze Estimation in Natural Environments
Supplementary video for the Fischer, Chang and Demiris ECCV2018 paper
In this work, we consider the problem of robust gaze estimation in natural environments. Large camera-to-subject distances and high variations in head pose and eye gaze angles are common in such environments. This leads to two main shortfalls in state-of-the-art methods for gaze estimation: hindered ground truth gaze annotation and diminished gaze estimation accuracy as image resolution decreases with distance. We first record a novel dataset of varied gaze and head pose images in a natural environment, addressing the issue of ground truth annotation by measuring head pose using a motion capture system and eye gaze using mobile eye-tracking glasses. We apply semantic image inpainting to the area covered by the glasses to bridge the gap between training and testing images by removing the obtrusiveness of the glasses. We also present a new real-time algorithm involving appearance-based deep convolutional neural networks with increased capacity to cope with the diverse images in the new dataset. Experiments with this network architecture are conducted on a number of diverse eye-gaze datasets, including our own, and in cross-dataset evaluations. We demonstrate state-of-the-art performance in terms of estimation accuracy in all experiments, and the architecture performs well even on lower-resolution images.
Conference: European Conference on Computer Vision (ECCV2018)
Authors: T. Fischer, H. J. Chang, and Y. Demiris
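As an illustration of the appearance-based approach described above, here is a minimal PyTorch sketch of a two-branch convolutional network that regresses gaze yaw and pitch from a pair of eye patches plus a head-pose vector. The layer sizes, input resolution and fusion scheme are assumptions for illustration only, not the published RT-GENE architecture.

    # Minimal sketch (not the RT-GENE network): a shared CNN encodes each eye
    # patch; the encodings are fused with the head pose to predict gaze angles.
    import torch
    import torch.nn as nn

    class GazeNet(nn.Module):
        def __init__(self):
            super().__init__()
            # Shared convolutional encoder applied to each eye patch.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            )
            # Fuse both eye encodings with the head pose (yaw, pitch).
            self.head = nn.Sequential(
                nn.Linear(64 * 16 * 2 + 2, 256), nn.ReLU(),
                nn.Linear(256, 2),  # gaze yaw and pitch
            )

        def forward(self, left_eye, right_eye, head_pose):
            feats = torch.cat([self.encoder(left_eye),
                               self.encoder(right_eye), head_pose], dim=1)
            return self.head(feats)

    model = GazeNet()
    gaze = model(torch.randn(1, 3, 36, 60), torch.randn(1, 3, 36, 60),
                 torch.zeros(1, 2))  # -> tensor of shape (1, 2)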
Transferring Visuomotor Learning from Simulation
Supplementary video for the Nguyen et al. IROS2018 paper
Hand-eye coordination is a requirement for many manipulation tasks, including grasping and reaching. However, accurate hand-eye coordination has proven especially difficult to achieve in complex robots such as the iCub humanoid. In this work, we solve the hand-eye coordination task using a visuomotor deep neural network predictor that estimates the arm's joint configuration given a stereo image pair of the arm and the underlying head configuration. As there are various unavoidable sources of sensing error on the physical robot, we train the predictor on images obtained from simulation. The images from simulation were modified to look realistic using an image-to-image translation approach. In various experiments, we first show that the visuomotor predictor provides accurate joint estimates of the iCub's hand in simulation. We then show that the predictor can be used to obtain the systematic error of the robot's joint measurements on the physical iCub robot. We demonstrate that a calibrator can be designed to automatically compensate for this error. Finally, we validate that this enables accurate reaching of objects while circumventing manual fine-calibration of the robot.
Conference: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018)
Authors: P. Nguyen, T. Fischer, H. J. Chang, U. Pattacini, G. Metta, and Y. Demiris
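A minimal numerical sketch of the calibration idea follows: if a visuomotor predictor recovers the arm's true joint angles from images, the average difference between the robot's encoder readings and the predictor's estimates reveals the systematic per-joint error, which a calibrator can then subtract. The predict_joints function below is a hypothetical stand-in for the trained network, here simulated with noise.

    # Sketch of systematic-error estimation under simplifying assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    true_offsets = np.array([0.05, -0.02, 0.01])   # unknown encoder bias (rad)

    def predict_joints(true_angles):
        """Hypothetical stand-in for the trained visuomotor network: in the
        paper it maps stereo images to joint angles; here we pretend it
        recovers the true angles up to small noise."""
        return true_angles + rng.normal(0.0, 0.005, size=3)

    true_q = rng.uniform(-1.0, 1.0, size=(100, 3)) # 100 sampled arm poses
    encoder_q = true_q + true_offsets              # biased on-board readings
    residuals = encoder_q - np.array([predict_joints(q) for q in true_q])
    estimated_offsets = residuals.mean(axis=0)     # close to true_offsets

    calibrated_q = encoder_q - estimated_offsets   # compensated joint readings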
Context-aware Deep Feature Compression for Visual Tracking
Supplementary video for the Choi et al. CVPR2018 paper
Conference: IEEE Conference on Computer Vision and Pattern Recognition (CVPR2018)
Authors: J. Choi, H. J. Chang, T. Fischer, S. Yun, K. Lee, J. Jeong, Y. Demiris, and J. Y. Choi
User Modelling Using Multimodal Information for Dressing
Supplementary video for the Gao, Chang, and Demiris paper
Human-Robot Interaction with DAC-H3 cognitive architecture
Supplementary video for the Moulin-Frier, Fischer et al. TCDS2017 paper
The robot self-regulates two drives, one for knowledge acquisition and one for knowledge expression. The acquired information includes labels for perceived objects, agents and body parts, as well as associations between body part touch and motor information. In addition, a number of goal-oriented behaviors are executed on human request: passing objects, showing the learned kinematic structure, recognizing actions, and pointing to human body parts. A complex narrative dialog about the robot's past experiences is demonstrated at the end of the video.
Journal: IEEE Transactions on Cognitive and Developmental Systems, 2017
Authors: C. Moulin-Frier*, T. Fischer*, M. Petit, G. Pointeau, J.-Y. Puigbo, U. Pattacini, S. C. Low, D. Camilleri, P. Nguyen, M. Hoffmann, H. J. Chang, M. Zambelli, A.-L. Mealier, A. Damianou, G. Metta, T. J. Prescott, Y. Demiris, P. F. Dominey, and P. F. M. J. Verschure (*: equal contributions)
Personalized Dressing using User Modeling in Latent Spaces
Supplementary video for the Zhang, Cully, and Demiris IROS2017 paper
Robots have the potential to provide tremendous support to disabled and elderly people in their everyday tasks, such as dressing. Many recent studies on robotic dressing assistance view dressing as a trajectory planning problem. However, user movements during the dressing process are rarely taken into account, which often leads to failure of the planned trajectory and may put the user at risk. The main difficulty in taking user movements into account is the severe occlusion created by the robot, the user, and the clothes during the dressing process, which prevents vision sensors from accurately detecting the user's posture in real time. In this paper, we address this problem by introducing an approach that allows the robot to automatically adapt its motion according to the force applied on the robot's gripper by user movements. The paper makes two main contributions: 1) a hierarchical multi-task control strategy that automatically adapts the robot motion and minimizes the force applied between the user and the robot caused by user movements; 2) an online update of the dressing trajectory based on the user's movement limitations, modeled with a Gaussian Process Latent Variable Model in a latent space, and on the density information extracted from that latent space. The combination of these two contributions leads to personalized dressing assistance that can cope with unpredicted user movements during dressing while constantly minimizing the force the robot may apply to the user. The experimental results demonstrate that the proposed method allows the Baxter humanoid robot to provide personalized dressing assistance to human users with simulated upper-body impairments.
Authors: Fan Zhang, Antoine Cully, Yiannis Demiris
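To make the first contribution concrete, below is a minimal sketch of force-driven motion adaptation: when the force measured at the gripper exceeds a deadband, the next waypoint is displaced along the force direction so the robot yields to the user. The threshold and gain are illustrative assumptions, and this admittance-style rule is a simplification, not the paper's hierarchical multi-task controller.

    # Toy admittance-style waypoint adaptation; constants are assumptions.
    import numpy as np

    FORCE_THRESHOLD = 3.0    # N, assumed deadband on the gripper force
    COMPLIANCE_GAIN = 0.01   # m per N, assumed compliance gain

    def adapt_waypoint(waypoint, force):
        """Displace a 3-D waypoint along the measured force direction so
        the robot yields to the user's movement."""
        magnitude = np.linalg.norm(force)
        if magnitude < FORCE_THRESHOLD:
            return waypoint              # small forces: keep the planned path
        direction = force / magnitude
        return waypoint + COMPLIANCE_GAIN * (magnitude - FORCE_THRESHOLD) * direction

    new_wp = adapt_waypoint(np.array([0.4, 0.1, 0.9]), np.array([0.0, 6.0, 0.0]))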
Attentional Network for Adaptive Visual Tracking
Supplementary video for the Choi et al. CVPR2017 paper
Title: Attentional Correlation Filter Network for Adaptive Visual Tracking
Authors: Jongwon Choi, Hyung Jin Chang, Sangdoo Yun, Tobias Fischer, Yiannis Demiris, and Jin Young Choi
Adaptive User Model in Car Racing Games
This video shows our framework for Adaptive User Modelling in Car Racing Games. It shows the sequential steps of the model, the simulator, and the steps carried out to implement the user model.
Assisted Painting of 3D Structures Using Shared Control
Assisted Painting of 3D Structures Using Shared Control with Under-actuated Robots
"Assisted Painting of 3D Structures Using Shared Control with Under-actuated Robots", ICRA 2017.
Authors: J. Elsdon and Y. Demiris.
Personalised Track Design in Car Racing Games
Video shows a short demo of the track changing algorithm that creates a personalised track according to the user's needs and abilities.
Real-time adaptation of a computer game's content to the user's skills and abilities can enhance the player's engagement and immersion. Understanding the user's potential while playing is essential for the successful procedural generation of user-tailored content. We investigate how player models can be created in car racing games. Our user model uses a combination of data from unobtrusive sensors while the user plays a car racing simulator. It extracts features through machine learning techniques, which are then used to comprehend the user's gameplay by utilising the educational theoretical frameworks of the Concept of Flow and the Zone of Proximal Development. The end result is a new track that fits the user's needs, which aids both the training of the driver and their engagement in the game. To validate that the system designs personalised tracks, we associated the average performance of the 41 users who played the game with the difficulty factor of the generated track. In addition, the variation in paths of the implemented tracks between users provides a good indicator of the suitability of the system.
Conference: IEEE Conference on Computational Intelligence and Games (CIG 2016)
Title: Personalised Track Design in Car Racing Games
Authors: Theodosis Georgiou and Yiannis Demiris
Supporting article: https://spiral.imperial.ac.uk/handle/10044/1/39560
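As a toy illustration of the flow-based reasoning above, the sketch below keeps the generated track's difficulty inside a band around the player's estimated skill, so the driver is neither bored nor overwhelmed. The update rule and constants are assumptions for illustration, not the paper's model.

    # Toy flow-channel difficulty update; band and step are assumptions.
    def next_difficulty(current, skill, band=0.1, step=0.05):
        """Nudge track difficulty towards the player's skill estimate while
        staying inside an assumed 'flow' band."""
        if skill - current > band:
            return current + step         # under-challenged: harder track
        if current - skill > band:
            return current - step         # over-challenged: easier track
        return current                    # inside the flow channel

    difficulty = 0.5
    for skill_estimate in [0.55, 0.7, 0.72, 0.6]:
        difficulty = next_difficulty(difficulty, skill_estimate)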
Multimodal Imitation Using Self-Learned Sensorimotor Representations
Supplementary video for the Zambelli and Demiris IROS2016 paper
Although many tasks intrinsically involve multiple modalities, often only data from a single modality are used to improve a complex robot's acquisition of new skills. We present a method to equip robots with multimodal learning skills to achieve multimodal imitation on-the-fly on multiple concurrent task spaces, including vision, touch and proprioception, using only self-learned multimodal sensorimotor relations, without the need to solve inverse kinematics problems or formulate explicit analytical models. We evaluate the proposed method on a humanoid iCub robot learning to interact with a piano keyboard and imitating a human demonstration. Since no assumptions are made about the kinematic structure of the robot, the method can also be applied to different robotic platforms.
Authors: Martina Zambelli and Yiannis Demiris
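The sketch below illustrates the core idea under strong simplifications: a forward model from motor commands to multimodal sensory effects is learned online during babbling, and imitation then amounts to searching for the command whose predicted effect matches the demonstrated one, with no inverse kinematics involved. The linear model and the random search are illustrative assumptions, not the paper's learner.

    # Toy online forward-model learning and inverse-kinematics-free imitation.
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(3, 2))              # unknown true sensorimotor relation
    W = np.zeros((3, 2))                     # self-learned forward model

    # Motor babbling: observe (command, effect) pairs and update W online.
    for _ in range(500):
        c = rng.uniform(-1.0, 1.0, 2)        # motor command
        s = A @ c + rng.normal(0.0, 0.01, 3) # proprioceptive/visual/touch effect
        W += 0.1 * np.outer(s - W @ c, c)    # online gradient step

    # Imitation: pick the command whose predicted effect matches the demo.
    s_target = A @ np.array([0.5, -0.3])     # demonstrated sensory state
    candidates = rng.uniform(-1.0, 1.0, (2000, 2))
    errors = np.linalg.norm(candidates @ W.T - s_target, axis=1)
    best_command = candidates[np.argmin(errors)]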
Iterative Path Optimisation for Dressing Assistance
Supplementary video for the Gao, Chang, and Demiris IROS2016 paper
We propose an online iterative path optimisation method to enable a Baxter humanoid robot to assist human users to dress. The robot searches for the optimal personalised dressing path using vision and force sensor information: vision information is used to recognise the human pose and model the movement space of the upper-body joints, while force sensor information is used by the robot to detect external force resistance and locally adjust its motion. We propose a new stochastic path optimisation method based on adaptive moment estimation. We first compare the proposed method with other path optimisation algorithms on synthetic data; experimental results show that the method achieves the smallest error with fewer iterations and less computation time. We also evaluate the method on real-world data by enabling the Baxter robot to assist real human users with their dressing.
Authors: Yixing Gao, Hyung Jin Chang, Yiannis Demiris
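The paper builds its stochastic path optimiser on adaptive moment estimation; the sketch below applies Adam-style updates directly to a set of 3-D waypoints under a toy quadratic cost, standing in for the real vision- and force-based dressing objective. The cost, gains and path shape are illustrative assumptions.

    # Adam-style optimisation of a waypoint path; the cost is a toy stand-in.
    import numpy as np

    def cost_grad(path, goal):
        return 2 * (path - goal)               # gradient of ||path - goal||^2

    path = np.zeros((10, 3))                   # 10 waypoints in 3-D
    goal = np.linspace([0, 0, 0], [0.5, 0.2, 0.8], 10)
    m = v = np.zeros_like(path)
    beta1, beta2, lr, eps = 0.9, 0.999, 0.05, 1e-8

    for t in range(1, 200):
        g = cost_grad(path, goal)
        m = beta1 * m + (1 - beta1) * g        # first moment (mean of gradients)
        v = beta2 * v + (1 - beta2) * g**2     # second moment (uncentred variance)
        m_hat = m / (1 - beta1**t)             # bias correction
        v_hat = v / (1 - beta2**t)
        path -= lr * m_hat / (np.sqrt(v_hat) + eps)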
Kinematic Structure Correspondences via Hypergraph Matching
Supplementary video for the Chang, Fischer, Petit, Zambelli and Demiris CVPR2016 paper
In this paper, we present a novel framework for finding the kinematic structure correspondence between two objects in videos via hypergraph matching. In contrast to prior appearance- and graph-alignment-based matching methods, which have been applied to pairs of similar static images, the proposed method finds correspondences between the dynamic kinematic structures of heterogeneous objects in videos.
Our main contributions can be summarised as follows:
(i) casting the kinematic structure correspondence problem as a hypergraph matching problem, incorporating multi-order similarities with normalising weights;
(ii) a structural topology similarity measure based on a new topology-constrained subgraph isomorphism aggregation;
(iii) a kinematic correlation measure between pairwise nodes; and
(iv) a combinatorial local motion similarity measure using geodesic distance on the Riemannian manifold.
We demonstrate the robustness and accuracy of our method through a number of experiments on synthetic and real data, showing that it outperforms various other methods.
Authors: Hyung Jin Chang, Tobias Fischer, Maxime Petit, Martina Zambelli, Yiannis Demiris
Visual Tracking Using Attention-Modulated Disintegration and Integration
Supplementary video for the Choi et al. CVPR2016 paper
Authors: J. Choi, H. J. Chang, J. Jeong, Y. Demiris, and J. Y. Choi
Markerless Perspective Taking for Humanoid Robots
Supplementary video for the Fischer and Demiris ICRA2016 paper
Perspective taking enables humans to imagine the world from another viewpoint. This allows reasoning about the state of other agents, which in turn is used to more accurately predict their behavior. In this paper, we equip an iCub humanoid robot with the ability to perform visuospatial perspective taking (PT) using a single depth camera mounted above the robot. Our approach has the distinct benefit that the robot can be used in unconstrained environments, as opposed to previous works which employ marker-based motion capture systems. Prior to and during the PT, the iCub learns the environment, recognizes objects within the environment, and estimates the gaze of surrounding humans. We propose a new head pose estimation algorithm which shows a performance boost by normalizing the depth data to be aligned with the human head. Inspired by psychological studies, we employ two separate mechanisms for the two different types of PT. We implement line-of-sight tracing to determine whether an object is visible to the humans (level 1 PT). For more complex PT tasks (level 2 PT), the acquired point cloud is mentally rotated, which allows algorithms to reason as if the input data had been acquired from an egocentric perspective. We show that this can be used to better judge where objects are in relation to the humans. The multifaceted improvements to the PT pipeline advance the state of the art, and move PT in robots to markerless, unconstrained environments.
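The mental rotation step in the pipeline above amounts to a rigid transform: re-expressing the robot-centred point cloud in the human's head frame so that egocentric algorithms can be reused unchanged. The sketch below shows that transform; the head pose values are illustrative assumptions.

    # Rigid transform into an assumed human head frame (level 2 PT sketch).
    import numpy as np

    def to_egocentric(points, head_pos, head_rot):
        """Express Nx3 world-frame points in the head frame:
        p_head = R^T (p_world - t), applied row-wise."""
        return (points - head_pos) @ head_rot

    yaw = np.pi / 2                                   # assumed head orientation
    head_rot = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                         [np.sin(yaw),  np.cos(yaw), 0.0],
                         [0.0,          0.0,         1.0]])
    cloud = np.random.rand(1000, 3)                   # stand-in for depth data
    ego_cloud = to_egocentric(cloud, np.array([1.0, 0.5, 1.6]), head_rot)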
Hierarchical Action Learning by Instruction
Supplementary video for the Petit and Demiris ICRA2016 paper
This video accompanies the paper "Hierarchical Action Learning by Instruction Through Interactive Grounding of Body Parts and Proto-actions", presented at the IEEE International Conference on Robotics and Automation (ICRA 2016).
One-shot Learning of Assistance by Demonstration
Supplementary video for our ROMAN 2015 paper
Personalised Dressing Assistance by Humanoid Robots
Supplementary video for our IROS 2015 paper
Lifelong Augmentation of Multimodal Streaming Memories
Supplementary video for the Petit, Fischer and Demiris TCDS2016 paper
Many robotics algorithms can benefit from storing and recalling large amounts of accumulated sensorimotor and interaction data. We provide a principled framework for the cumulative organisation of streaming autobiographical data, so that data can be continuously processed and augmented as the processing and reasoning abilities of the agent develop and further interactions with humans take place. As an example, we show how a kinematic structure learning algorithm reasons a posteriori about the skeleton of a human hand. A partner can be asked to provide feedback about the augmented memories, which can in turn be supplied to the reasoning processes in order to adapt their parameters. We employ active, multimodal remembering, so the robot as well as humans can gain insights into both the original and the augmented memories. Our framework is capable of storing discrete and continuous data in real time, and thus creates a full memory. The data can cover multiple modalities and several layers of abstraction (e.g. from raw sound signals over sentences to extracted meanings). We show a typical interaction with a human partner using an iCub humanoid robot. The framework is implemented in a platform-independent manner. In particular, we validate its multi-platform capabilities using the iCub, Baxter and NAO robots. We also provide an interface to cloud-based services that allow automatic annotation of episodes. Our framework is geared towards the developmental robotics community, as it 1) provides a variety of interfaces for other modules, 2) unifies previous works on autobiographical memory, and 3) is licensed as open source software.
Journal: IEEE Transactions on Cognitive and Developmental Systems, 2016
Authors: M. Petit*, T. Fischer* and Y. Demiris (*: equal contributions)
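A minimal sketch of the storage pattern described above follows: episodes of time-stamped multimodal entries that can later be augmented with derived annotations (for instance a learned kinematic structure) without modifying the raw stream. The schema is a deliberately simplified assumption, not the framework's actual interface.

    # Toy episodic memory with raw streams and a-posteriori augmentations.
    import time
    from collections import defaultdict

    class EpisodicMemory:
        def __init__(self):
            self.episodes = defaultdict(list)       # episode id -> raw stream
            self.augmentations = defaultdict(list)  # episode id -> derived data

        def store(self, episode_id, modality, payload):
            """Append a time-stamped multimodal entry to an episode."""
            self.episodes[episode_id].append((time.time(), modality, payload))

        def augment(self, episode_id, label, payload):
            """Attach a-posteriori reasoning results, e.g. a learned skeleton."""
            self.augmentations[episode_id].append((label, payload))

    mem = EpisodicMemory()
    mem.store("ep-1", "image", b"...")              # raw frame bytes
    mem.store("ep-1", "speech", "hello iCub")
    mem.augment("ep-1", "kinematic_structure", {"joints": 21})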
Unsupervised Complex Kinematic Structure Learning
Supplementary video of our CVPR 2015 paper
Supplementary video of Chang HJ, Demiris Y, 2015, "Unsupervised Learning of Complex Articulated Kinematic Structures combining Motion and Skeleton Information", IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Find more information in the paper.
Online Heterogeneous Ensemble Learning
Online Heterogeneous Ensemble Learning of Sensorimotor Contingencies from Motor Babbling
Forward models play a key role in cognitive agents by providing predictions of the sensory consequences of motor commands, also known as sensorimotor contingencies (SMCs). In continuously evolving environments, the ability to anticipate is fundamental in distinguishing cognitive from reactive agents, and it is particularly relevant for autonomous robots, which must be able to adapt their models online. Online learning skills, highly accurate forward models and multiple-step-ahead predictions are needed to enhance robots' anticipation capabilities. We propose an online heterogeneous ensemble learning method for building accurate forward models of SMCs that relate motor commands to effects in the robot's sensorimotor system, in particular considering proprioception and vision. Our method achieves up to 98% higher accuracy in both short- and long-term predictions compared to single predictors and other online and offline homogeneous ensembles. The method is validated on two different humanoid robots, namely the iCub and the Baxter.
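The sketch below shows the ensemble principle under simplifying assumptions: several heterogeneous forward models predict the sensory effect of a motor command, their outputs are fused with weights inversely proportional to a running error estimate, and those weights adapt online as observations arrive. The two toy experts and the weighting rule are illustrative, not the paper's exact method.

    # Toy online heterogeneous ensemble of forward models.
    import numpy as np

    class Ensemble:
        def __init__(self, experts, decay=0.9):
            self.experts = experts
            self.errors = np.ones(len(experts))   # running error per expert
            self.decay = decay

        def predict(self, command):
            preds = np.array([expert(command) for expert in self.experts])
            weights = (1.0 / self.errors) / np.sum(1.0 / self.errors)
            return weights @ preds, preds         # fused and per-expert outputs

        def update(self, preds, observed):
            inst = np.abs(preds - observed)       # instantaneous errors
            self.errors = self.decay * self.errors + (1 - self.decay) * inst

    ens = Ensemble([lambda c: 0.8 * c,            # e.g. a linear expert
                    lambda c: np.tanh(c)])        # e.g. a nonlinear expert
    for c in np.linspace(-1.0, 1.0, 50):
        fused, preds = ens.predict(c)
        ens.update(preds, np.sin(c))              # toy ground-truth effect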
Musical Human-Robot Collaboration with Baxter
This video shows our framework for adaptive musical human-robot collaboration
This video shows our framework for adaptive musical human-robot collaboration. Baxter is in charge of the drum accompaniment and is learning the preferences of the user, who is in charge of the melody. For more information read: Sarabia M, Lee K, Demiris Y, 2015, "Towards a Synchronised Grammars Framework for Adaptive Musical Human-Robot Collaboration", IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Publisher: IEEE, Pages: 715-721.
Assistive Robotic Technology for Hospital Patients
Junior spent a week keeping many patients company at the Chelsea & Westminster Hospital
A NAO humanoid robot, Junior, spent a week keeping many patients company at the Chelsea & Westminster Hospital in one of the largest trials of its kind in the world. Our results show that patients really enjoyed interacting with the robot.
The Online Echo State Gaussian Process (OESGP)
A video demonstrating the Online Echo State Gaussian Process (OESGP) for temporal learning
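A minimal sketch of the echo-state component behind OESGP follows: a fixed random recurrent reservoir expands an input stream into a rich state vector, on top of which a Gaussian Process (omitted here) performs online temporal regression. The reservoir size, scaling and input signal are illustrative assumptions.

    # Toy echo state reservoir; the GP regression layer is omitted.
    import numpy as np

    rng = np.random.default_rng(42)
    N = 100                                           # reservoir size
    W_in = rng.uniform(-0.5, 0.5, N)                  # input weights
    W = rng.normal(size=(N, N))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # enforce echo-state property

    x = np.zeros(N)
    states = []
    for u in np.sin(np.linspace(0, 8 * np.pi, 200)):  # example 1-D input stream
        x = np.tanh(W_in * u + W @ x)                 # recurrent state update
        states.append(x.copy())                       # features for the GP layer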
ARTY Nao Sidekick Imperial Festival
The ARTY wheelchair, integrated with a NAO humanoid, is presented at the annual Imperial Festival, where children used the system.
ARTY NAO Experiment
A Humanoid Robot Companion for Wheelchair Users
This video shows the ARTY wheelchair integrated with a humanoid robot (NAO). The humanoid companion acts as a driving aid by pointing out obstacles and giving directions to the wheelchair user. More information at: Sarabia M, Demiris Y, 2013, "A Humanoid Robot Companion for Wheelchair Users", International Conference on Social Robotics (ICSR), Publisher: Springer, Pages: 432-441
HAMMER on iCub: Towards Contextual Action Recognition
"Towards Contextual Action Recognition and Target Localization with Active Allocation of Attention"
iCub Learning and Playing the Towers of Hanoi Puzzle
"Learning Reusable Task Representations using Hierarchical Activity Grammars with Uncertainties"
Kyuhwa Lee, Tae-Kyun Kim and Yiannis Demiris, "Learning Reusable Task Representations using Hierarchical Activity Grammars with Uncertainties", IEEE International Conference on Robotics and Automation (ICRA), St. Paul, USA, 2012.
iCub Learning Human Dance Structures for Imitation
The iCub shows off its dance moves
Kyuhwa Lee, Tae-Kyun Kim and Yiannis Demiris, "Learning Reusable Task Representations using Hierarchical Activity Grammars with Uncertainties", IEEE International Conference on Robotics and Automation (ICRA), St. Paul, USA, 2012
iCub Grasping Demonstration
A demonstration of the iCub grasping mechanism
Yanyu Su, Yan Wu, Kyuhwa Lee, Zhijiang Du, Yiannis Demiris, "Robust Grasping Mechanism for an Under-actuated Anthropomorphic Hand under Object Position Uncertainty", IEEE-RAS International Conference on Humanoid Robots, Osaka, Japan, 2012.
iCub playing the Theremin
The iCub humanoid robot plays one of the most difficult musical instruments
The iCub humanoid robot plays the Theremin, one of the most difficult musical instruments, in real time.
ARTY Smart Wheelchair
Helping young children safely use a wheelchair
The Assistive Robotic Transport for Youngsters (ARTY) is a smart wheelchair designed to help young children with disabilities who are unable to safely use a regular powered wheelchair. It is our hope that ARTY will give users an opportunity to independently explore, learn and play.