Videos from the lab

Context-aware Deep Feature Compression for Visual Tracking

Supplementary video for the Choi et al. CVPR2018 paper

We propose a new context-aware correlation-filter-based tracking framework that achieves both high computational speed and state-of-the-art performance among real-time trackers. The major contribution to the high computational speed lies in the proposed deep feature compression, which is achieved by a context-aware scheme utilizing multiple expert auto-encoders; a context in our framework refers to the coarse category of the tracking target according to appearance patterns. In the pre-training phase, one expert auto-encoder is trained per category. In the tracking phase, the best expert auto-encoder is selected for a given target, and only this auto-encoder is used. To achieve high tracking performance with the compressed feature map, we introduce extrinsic denoising processes and a new orthogonality loss term for pre-training and fine-tuning of the expert auto-encoders. We validate the proposed context-aware framework through a number of experiments, where our method achieves performance comparable to state-of-the-art trackers that cannot run in real time, while running at over 100 fps.

Conference: IEEE Conference on Computer Vision and Pattern Recognition (CVPR2018)

Authors: J. Choi, H. J. Chang, T. Fischer, S. Yun, K. Lee, J. Jeong, Y. Demiris, and J. Y. Choi
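
As a rough illustration of the scheme described above, the sketch below (ours, not the authors' released code; the architecture, sizes and the exact form of the orthogonality term are assumptions) selects the expert auto-encoder that best reconstructs the target's deep features, and shows one plausible way to penalise correlated compressed channels.

```python
import torch
import torch.nn as nn

class ExpertAutoEncoder(nn.Module):
    """Compresses a deep feature map; one expert is trained per context."""
    def __init__(self, channels=512, compressed=64):
        super().__init__()
        self.enc = nn.Conv2d(channels, compressed, kernel_size=1)
        self.dec = nn.Conv2d(compressed, channels, kernel_size=1)

    def forward(self, x):
        z = torch.relu(self.enc(x))
        return self.dec(z), z

def orthogonality_penalty(z):
    # One plausible reading of an orthogonality term (assumption): push the
    # Gram matrix of the compressed channels towards the identity.
    b, c, h, w = z.shape
    zf = z.reshape(b, c, h * w)
    gram = torch.bmm(zf, zf.transpose(1, 2))            # (b, c, c)
    eye = torch.eye(c, device=z.device).expand_as(gram)
    return ((gram - eye) ** 2).mean()

experts = [ExpertAutoEncoder() for _ in range(10)]      # one per coarse category

def select_expert(feature_map):
    """Pick the expert that reconstructs the initial target's features best."""
    with torch.no_grad():
        errors = [((e(feature_map)[0] - feature_map) ** 2).mean().item()
                  for e in experts]
    return experts[errors.index(min(errors))]

best = select_expert(torch.randn(1, 512, 7, 7))         # used for the whole sequence
```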

User Modelling Using Multimodal Information for Dressing

Supplementary video for the Gao, Chang, and Demiris paper

Human-Robot Interaction with DAC-H3 cognitive architecture

Supplementary video for the Moulin-Frier, Fischer et al. TCDS2017 paper

The robot self-regulates two drives, for knowledge acquisition and knowledge expression. The acquired information consists of labels for perceived objects, agents and body parts, as well as associations between body-part touch and motor information. In addition, a number of goal-oriented behaviors are executed on human request: passing objects, showing the learned kinematic structure, recognizing actions, and pointing to the human's body parts. A complex narrative dialog about the robot's past experiences is demonstrated at the end of the video.

Journal: IEEE Transactions on Cognitive and Developmental Systems, 2017

Authors: C. Moulin-Frier*, T. Fischer*, M. Petit, G. Pointeau, J.-Y. Puigbo, U. Pattacini, S. C. Low, D. Camilleri, P. Nguyen, M. Hoffmann, H. J. Chang, M. Zambelli, A.-L. Mealier, A. Damianou, G. Metta, T. J. Prescott, Y. Demiris, P. F. Dominey, and P. F. M. J. Verschure (*: equal contributions)

URL: http://hdl.handle.net/10044/1/50801

Personalized Dressing using User Modeling in Latent Spaces

Supplementary video for the Zhang, Cully, and Demiris IROS2017 paper

Robots have the potential to provide tremendous support to disabled and elderly people in their everyday tasks, such as dressing. Many recent studies on robotic dressing assistance view dressing as a trajectory planning problem. However, the user's movements during the dressing process are rarely taken into account, which often leads to failure of the planned trajectory and may put the user at risk. The main difficulty in taking user movements into account is caused by severe occlusions created by the robot, the user, and the clothes during the dressing process, which prevent vision sensors from accurately detecting the user's posture in real time. In this paper, we address this problem by introducing an approach that allows the robot to automatically adapt its motion according to the force applied on the robot's gripper by the user's movements. There are two main contributions: 1) the use of a hierarchical multi-task control strategy to automatically adapt the robot motion and minimize the force between the user and the robot caused by user movements; 2) the online update of the dressing trajectory based on the user's movement limitations, modeled with a Gaussian Process Latent Variable Model in a latent space, and the density information extracted from that latent space. The combination of these two contributions leads to personalized dressing assistance that can cope with unpredicted user movements during dressing while constantly minimizing the force that the robot may apply on the user. The experimental results demonstrate that the proposed method allows the Baxter humanoid robot to provide personalized dressing assistance for human users with simulated upper-body impairments.

Conference: IROS2017
Authors: Fan Zhang, Antoine Cully, Yiannis Demiris
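
The first contribution, adapting the robot motion to the force on the gripper, can be sketched in its simplest form as an admittance-style waypoint update. The snippet below is a hypothetical illustration, not the paper's hierarchical multi-task controller; the gains and thresholds are made up.

```python
import numpy as np

def adapt_waypoint(waypoint, measured_force, compliance=0.002, deadband=2.0):
    """Shift the planned gripper waypoint along the measured force direction.

    waypoint:        planned gripper position (3,), metres
    measured_force:  force at the gripper (3,), newtons
    compliance:      metres of displacement per newton (made-up value)
    """
    if np.linalg.norm(measured_force) < deadband:   # ignore sensor noise
        return waypoint
    # Move with the force, reducing the user-robot interaction force.
    return waypoint + compliance * measured_force

# Example: the user pulls the sleeve with 8 N towards themselves.
print(adapt_waypoint(np.array([0.5, 0.0, 1.0]), np.array([0.0, 8.0, 0.0])))
```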

Attentional Correlation Filter Network for Adaptive Visual Tracking

Supplementary video for the Choi et al. CVPR2017 paper

We propose a new tracking framework with an attentional mechanism that chooses a subset of the associated correlation filters for increased robustness and computational efficiency. The subset of filters is adaptively selected by a deep attentional network according to the dynamic properties of the tracking target. Our contributions are manifold, and are summarised as follows: (i) introducing the Attentional Correlation Filter Network, which allows adaptive tracking of dynamic targets; (ii) utilising an attentional network that shifts the attention to the best candidate modules and predicts the estimated accuracy of currently inactive modules; (iii) enlarging the variety of correlation filters to cover target drift, blurriness, occlusion, scale changes, and flexible aspect ratios; (iv) validating the robustness and efficiency of the attentional mechanism for visual tracking through a number of experiments. Our method achieves performance similar to non-real-time trackers, and state-of-the-art performance amongst real-time trackers.

Conference: CVPR2017
Authors: Jongwon Choi, Hyung Jin Chang, Sangdoo Yun, Tobias Fischer, Yiannis Demiris, and Jin Young Choi
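
To illustrate the attentional selection idea, here is a minimal sketch (ours, not the authors' network): a linear scorer stands in for the deep attentional network, predicting the accuracy of each correlation filter module so that only the top-scoring subset is run.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(20, 128))            # toy scorer: 20 modules, 128-d target summary

def select_modules(target_summary, k=5):
    scores = W @ target_summary           # predicted accuracy per module
    return np.argsort(scores)[-k:]        # indices of the k best modules

def fuse(target_summary, responses):
    active = select_modules(target_summary)
    # Only the active modules' correlation responses are computed and fused.
    return np.mean(responses[active], axis=0)

responses = rng.normal(size=(20, 31, 31)) # dummy correlation response maps
fused = fuse(rng.normal(size=128), responses)
```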

Adaptive User Model in Car Racing Games

This video shows our framework for Adaptive User Modelling in Car Racing Games. It shows the sequential steps of the model, the simulator, and the steps carried out to implement the user model.

Assisted Painting of 3D Structures Using Shared Control with Under-actuated Robots

"Assisted Painting of 3D Structures Using Shared Control with Under-actuated Robots", ICRA 2017.

Authors: J. Elsdon and Y. Demiris.

Personalised Track Design in Car Racing Games

This video shows a short demo of the track-changing algorithm that creates a personalised track according to the user's needs.

Real-time adaptation of a computer game's content to the user's skills and abilities can enhance the player's engagement and immersion. Understanding the user's potential while playing is highly important for the successful procedural generation of user-tailored content. We investigate how player models can be created in car racing games. Our user model uses a combination of data from unobtrusive sensors while the user is playing a car racing simulator. It extracts features through machine learning techniques, which are then used to comprehend the user's gameplay by utilising the educational theoretical frameworks of the Concept of Flow and the Zone of Proximal Development. The end result is a new track that fits the user's needs, which aids both the training of the driver and their engagement in the game. To validate that the system designs personalised tracks, we associated the average performance of the 41 users who played the game with the difficulty factor of the generated track. In addition, the variation in the paths of the implemented tracks between users provides a good indicator of the suitability of the system.

Conference: CIG 2016
Title: Personalised Track Design in Car Racing Games
Authors: Theodosis Georgiou and Yiannis Demiris

Supporting article: https://spiral.imperial.ac.uk/handle/10044/1/39560
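
As a toy illustration of the flow-based adaptation idea (not the paper's model; the update rule and constants are invented for the example), the difficulty of the next track can be nudged towards the player's demonstrated skill:

```python
def next_track_difficulty(difficulty, lap_performance, target=0.5, gain=0.3):
    """lap_performance in [0, 1]: 1 means the player mastered the lap."""
    # Raise difficulty when the player outperforms the target, lower it otherwise,
    # keeping the challenge inside the player's zone of proximal development.
    return max(0.0, difficulty + gain * (lap_performance - target))

print(next_track_difficulty(0.4, 0.8))   # a skilled lap -> slightly harder track
```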

Multimodal Imitation Using Self-Learned Sensorimotor Representations

Supplementary video for the Zambelli and Demiris IROS2016 paper

Although many tasks intrinsically involve multiple modalities, often only data from a single modality are used to improve a complex robot's acquisition of new skills. We present a method to equip robots with multimodal learning skills to achieve multimodal imitation on-the-fly on multiple concurrent task spaces, including vision, touch and proprioception, using only self-learned multimodal sensorimotor relations, without the need to solve inverse kinematics problems or formulate explicit analytical models. We evaluate the proposed method on an iCub humanoid robot learning to interact with a piano keyboard and imitating a human demonstration. Since no assumptions are made about the kinematic structure of the robot, the method can also be applied to different robotic platforms.

Conference: IROS2016
Authors: Martina Zambelli and Yiannis Demiris
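
A minimal sketch of the underlying idea, under our own simplifying assumptions (the paper learns richer sensorimotor models): during motor babbling the robot stores (motor command, multimodal effect) pairs, and imitation picks the command whose stored effect best matches the demonstration, with no inverse kinematics involved.

```python
import numpy as np

rng = np.random.default_rng(1)
motor = rng.uniform(-1, 1, size=(500, 7))   # babbled joint commands
vision = rng.normal(size=(500, 3))          # observed fingertip positions
touch = rng.normal(size=(500, 1))           # observed contact pressure

def imitate(demo_vision, demo_touch, w_vision=1.0, w_touch=0.5):
    # Weighted match of the demonstrated effect across both modalities.
    cost = (w_vision * np.linalg.norm(vision - demo_vision, axis=1)
            + w_touch * np.abs(touch[:, 0] - demo_touch))
    return motor[np.argmin(cost)]           # command that best reproduces the demo

print(imitate(np.array([0.1, -0.2, 0.05]), 0.3))
```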

Iterative Path Optimisation for Dressing Assistance

Supplementary video for the Gao, Chang, and Demiris IROS2016 paper

We propose an online iterative path optimisation method to enable a Baxter humanoid robot to assist human users to dress. The robot searches for the optimal personalised dressing path using vision and force sensor information: vision information is used to recognise the human pose and model the movement space of upper-body joints; force sensor information is used for the robot to detect external force resistance and to locally adjust its motion. We propose a new stochastic path optimisation method based on adaptive moment estimation. We first compare the proposed method with other path optimisation algorithms on synthetic data. Experimental results show that the method achieves the smallest error with fewer iterations and less computation time. We also evaluate the method on real-world data by enabling the Baxter robot to assist real human users with their dressing.

Conference: IROS2016
Authors: Yixing Gao, Hyung Jin Chang, Yiannis Demiris
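
The adaptive-moment-estimation update at the core of the proposed optimiser follows the well-known Adam rule. Below is a generic sketch applied to dressing-path waypoints; the cost gradient here is a toy stand-in for the paper's vision- and force-based objective.

```python
import numpy as np

def adam_step(path, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update of the waypoints; path and grad have shape (N, 3)."""
    m = b1 * m + (1 - b1) * grad            # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    return path - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

target = np.linspace([0.0, 0.0, 0.0], [0.0, 0.5, 1.0], 10)  # toy reference path
path = np.zeros((10, 3))
m, v = np.zeros_like(path), np.zeros_like(path)
for t in range(1, 201):
    # Noisy gradient of a toy cost; the paper derives it from vision and force.
    grad = (path - target) + 0.05 * np.random.default_rng(t).normal(size=path.shape)
    path, m, v = adam_step(path, grad, m, v, t)
```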

Kinematic Structure Correspondences via Hypergraph Matching

Supplementary video for the Chang, Fischer, Petit, Zambelli and Demiris CVPR2016 paper

In this paper, we present a novel framework for finding the kinematic structure correspondence between two objects in videos via hypergraph matching. In contrast to prior appearance- and graph-alignment-based matching methods, which have been applied to pairs of similar static images, the proposed method finds correspondences between two dynamic kinematic structures of heterogeneous objects in videos.
Our main contributions can be summarised as follows:
(i) casting the kinematic structure correspondence problem into a hypergraph matching problem, incorporating multi-order similarities with normalising weights;
(ii) a structural topology similarity measure based on a new topology-constrained subgraph isomorphism aggregation;
(iii) a kinematic correlation measure between pairwise nodes;
(iv) a combinatorial local motion similarity measure using geodesic distance on the Riemannian manifold.
We demonstrate the robustness and accuracy of our method through a number of experiments on synthetic and real data, showing that it outperforms various other methods.

Conference: CVPR2016
Authors: Hyung Jin Chang, Tobias Fischer, Maxime Petit, Martina Zambelli, Yiannis Demiris
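
As a toy illustration of contribution (i), hypergraph matching can be approached with rank-3 tensor power iteration over triplet similarities (a standard technique, not necessarily the paper's solver); here random similarities stand in for the multi-order measures above.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4 * 4                                  # all pairings of 4 nodes with 4 nodes
H = rng.random((n, n, n))                  # toy third-order similarities

x = np.ones(n) / n                         # soft assignment vector
for _ in range(50):
    x = np.einsum('ijk,j,k->i', H, x, x)   # tensor-vector-vector product
    x /= np.linalg.norm(x)                 # project back to unit norm

assignment = x.reshape(4, 4)               # soft correspondence matrix
print(assignment.argmax(axis=1))           # greedy discretisation
```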

Visual Tracking Using Attention-Modulated Disintegration and Integration

"Visual Tracking Using Attention-Modulated Disintegration and Integration", accepted for CVPR2016.

Authors: J. Choi, H. J. Chang, J. Jeong, Y. Demiris, and J. Y. Choi

Markerless Perspective Taking for Humanoid Robots

Supplementary video for the Fischer and Demiris ICRA2016 paper

Perspective taking enables humans to imagine the world from another viewpoint. This allows reasoning about the state of other agents, which in turn is used to more accurately predict their behavior. In this paper, we equip an iCub humanoid robot with the ability to perform visuospatial perspective taking (PT) using a single depth camera mounted above the robot. Our approach has the distinct benefit that the robot can be used in unconstrained environments, as opposed to previous works which employ marker-based motion capture systems. Prior to and during the PT, the iCub learns the environment, recognizes objects within the environment, and estimates the gaze of surrounding humans. We propose a new head pose estimation algorithm which shows a performance boost by normalizing the depth data to be aligned with the human head. Inspired by psychological studies, we employ two separate mechanisms for the two different types of PT. We implement line of sight tracing to determine whether an object is visible to the humans (level 1 PT). For more complex PT tasks (level 2 PT), the acquired point cloud is mentally rotated, which allows algorithms to reason as if the input data was acquired from an egocentric perspective. We show that this can be used to better judge where objects are in relation to the humans. The multifaceted improvements to the PT pipeline advance the state of the art, and move PT in robots to markerless, unconstrained environments.
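
Level-2 perspective taking boils down to expressing the acquired point cloud in the human's head frame. A minimal sketch under our own assumptions (a known head rotation R and position t in the robot frame):

```python
import numpy as np

def to_egocentric(points, R_head, t_head):
    """Express robot-frame points (N, 3) in the human head frame: R^T (p - t)."""
    return (points - t_head) @ R_head

# Toy example: a human 1.5 m away, facing the robot, head at 1.7 m height.
R_head = np.array([[-1.0,  0.0, 0.0],      # head axes expressed in the robot frame
                   [ 0.0, -1.0, 0.0],
                   [ 0.0,  0.0, 1.0]])
t_head = np.array([1.5, 0.0, 1.7])
cloud = np.array([[1.0, 0.2, 1.0]])        # an object on the table
print(to_egocentric(cloud, R_head, t_head))  # the object as the human sees it
```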

Hierarchical Action Learning by Instruction

Supplementary video for the Petit and Demiris ICRA2016 paper

This video accompanies the paper titled "Hierarchical Action Learning by Instruction Through Interactive Grounding of Body Parts and Proto-actions" presented at IEEE International Conference on Robotics and Automation 2016.

One-shot Learning of Assistance by Demonstration

Supplementary video for our ROMAN 2015 paper

Supplementary video for Kucukyilmaz A, Demiris Y, 2015, "One-shot assistance estimation from expert demonstrations for a shared control wheelchair system", International Symposium on Robot and Human Interactive Communication (RO-MAN). More information can be found in the paper.

Personalised Dressing Assistance by Humanoid Robots

Supplementary video for our IROS 2015 paper

Supplementary video for Gao Y, Chang HJ, Demiris Y, 2015, "User Modelling for Personalised Dressing Assistance by Humanoid Robots", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). More information can be found in the paper.

Lifelong Augmentation of Multimodal Streaming Memories

We provide a principled framework for the cumulative organisation of streaming autobiographical data

Many robotics algorithms can benefit from storing and recalling large amounts of accumulated sensorimotor and interaction data. We provide a principled framework for the cumulative organisation of streaming autobiographical data, so that data can be continuously processed and augmented as the processing and reasoning abilities of the agent develop and further interactions with humans take place. As an example, we show how a kinematic structure learning algorithm reasons a posteriori about the skeleton of a human hand. A partner can be asked to provide feedback about the augmented memories, which can in turn be supplied to the reasoning processes in order to adapt their parameters. We employ active, multimodal remembering, so that the robot as well as humans can gain insights into both the original and the augmented memories. Our framework is capable of storing discrete and continuous data in real-time, and thus creates a complete memory. The data can cover multiple modalities and several layers of abstraction (e.g. from raw sound signals over sentences to extracted meanings). We show a typical interaction with a human partner using an iCub humanoid robot. The framework is implemented in a platform-independent manner. In particular, we validate multi-platform capabilities using the iCub, Baxter and NAO robots. We also provide an interface to cloud-based services, which allows automatic annotation of episodes. Our framework is geared towards the developmental robotics community, as it 1) provides a variety of interfaces for other modules, 2) unifies previous works on autobiographical memory, and 3) is licensed as open source software.
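
A minimal sketch of the storage idea, under our own assumptions (names and structure are illustrative, not the framework's actual API): episodes accumulate time-stamped multimodal entries and can later be augmented a posteriori with derived annotations.

```python
import time

class Episode:
    """One autobiographical episode: raw stream plus later augmentations."""
    def __init__(self):
        self.entries = []           # time-stamped raw multimodal data
        self.augmentations = []     # annotations derived a posteriori

    def add(self, modality, data):
        self.entries.append((time.time(), modality, data))

    def augment(self, label, data):
        # e.g. a kinematic structure computed later from the stored frames
        self.augmentations.append((label, data))

memory = []                         # the autobiographical store
ep = Episode()
ep.add("vision", "frame_0001.png")
ep.add("sound", "utterance_0001.wav")
ep.augment("kinematic_structure", "hand_skeleton.json")
memory.append(ep)
```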

Unsupervised Complex Kinematic Structure Learning

Supplementary video of our CVPR 2015 paper

Supplementary video of Chang HJ, Demiris Y, 2015, "Unsupervised Learning of Complex Articulated Kinematic Structures combining Motion and Skeleton Information", IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Find more information in the paper.

Online Heterogeneous Ensemble Learning of Sensorimotor Contingencies from Motor Babbling

Forward models play a key role in cognitive agents by providing predictions of the sensory consequences of motor commands, also known as sensorimotor contingencies (SMCs). In continuously evolving environments, the ability to anticipate is fundamental in distinguishing cognitive from reactive agents, and it is particularly relevant for autonomous robots, which must be able to adapt their models in an online manner. Online learning skills, high accuracy of the forward models and multiple-step-ahead predictions are needed to enhance the robots' anticipation capabilities. We propose an online heterogeneous ensemble learning method for building accurate forward models of SMCs relating motor commands to effects in the robot's sensorimotor system, in particular considering proprioception and vision. Our method achieves up to 98% higher accuracy in both short- and long-term predictions, compared to single predictors and other online and offline homogeneous ensembles. The method is validated on two different humanoid robots, namely the iCub and the Baxter.
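
A minimal sketch of the heterogeneous-ensemble idea, with invented toy learners (the paper's ensemble members and weighting scheme are more sophisticated): each online model is weighted by the inverse of its running prediction error.

```python
import numpy as np

class RunningMean:
    """Trivial online learner, included only to keep the sketch self-contained."""
    def __init__(self):
        self.mean, self.n = 0.0, 0
    def predict(self, x):
        return self.mean
    def partial_fit(self, x, y):
        self.n += 1
        self.mean += (y - self.mean) / self.n

class LastValue:
    """Another trivial learner, to make the ensemble heterogeneous."""
    def __init__(self):
        self.last = 0.0
    def predict(self, x):
        return self.last
    def partial_fit(self, x, y):
        self.last = y

class OnlineEnsemble:
    def __init__(self, models, decay=0.9):
        self.models = models
        self.errors = np.ones(len(models))   # running error per model
        self.decay = decay

    def predict(self, x):
        preds = np.array([m.predict(x) for m in self.models])
        weights = 1.0 / (self.errors + 1e-8)
        return weights @ preds / weights.sum()   # error-weighted combination

    def update(self, x, y):
        for i, m in enumerate(self.models):
            err = abs(m.predict(x) - y)
            self.errors[i] = self.decay * self.errors[i] + (1 - self.decay) * err
            m.partial_fit(x, y)                  # every member learns online

ens = OnlineEnsemble([RunningMean(), LastValue()])
for step in range(5):
    ens.update(step, 2.0 * step)                 # stream of (command, effect) pairs
print(ens.predict(5))
```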

Musical Human-Robot Collaboration with Baxter

This video shows our framework for adaptive musical human-robot collaboration

This video shows our framework for adaptive musical human-robot collaboration. Baxter is in charge of the drum accompaniment and is learning the preferences of the user, who is in charge of the melody. For more information read: Sarabia M, Lee K, Demiris Y, 2015, "Towards a Synchronised Grammars Framework for Adaptive Musical Human-Robot Collaboration", IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Publisher: IEEE, Pages: 715-721.

Assistive Robotic Technology for Hospital Patients

Junior spent a week keeping many patients company at the Chelsea & Westminster hospital

A NAO humanoid robot, Junior, spent a week keeping many patients company at the Chelsea & Westminster Hospital in one of the largest trials of its kind in the world. Our results show that patients really enjoyed interacting with the robot.

The Online Echo State Gaussian Process (OESGP)

A video demonstrating the Online Echo State Gaussian Process (OESGP) for temporal learning

A video demonstrating the Online Echo State Gaussian Process (OESGP) for temporal learning and prediction. Find out more at: http://haroldsoh.com/otl-library/
Code available at: https://bitbucket.org/haroldsoh/otl/

ARTY Nao Sidekick Imperial Festival

The ARTY wheelchair, integrated with a NAO humanoid, was presented at the annual Imperial Festival, where children used the system.

ARTY NAO Experiment

A Humanoid Robot Companion for Wheelchair Users

This video shows the ARTY wheelchair integrated with a humanoid robot (NAO). The humanoid companion acts as a driving aid by pointing out obstacles and giving directions to the wheelchair user. More information at: Sarabia M, Demiris Y, 2013, "A Humanoid Robot Companion for Wheelchair Users", International Conference on Social Robotics (ICSR), Publisher: Springer, Pages: 432-441

HAMMER on iCub: Towards Contextual Action Recognition

"Towards Contextual Action Recognition and Target Localization with Active Allocation of Attention"

Dimitri Ognibene, Eris Chinellato, Miguel Sarabia and Yiannis Demiris, "Towards Contextual Action Recognition and Target Localization with Active Allocation of Attention", Conference on Biomimetic and Biohybrid Systems, 2012

iCub Learning and Playing the Towers of Hanoi Puzzle

"Learning Reusable Task Representations using Hierarchical Activity Grammars with Uncertainties"

Kyuhwa Lee, Tae-Kyun Kim and Yiannis Demiris, "Learning Reusable Task Representations using Hierarchical Activity Grammars with Uncertainties", IEEE International Conference on Robotics and Automation (ICRA), St. Paul, USA, 2012.

iCub Learning Human Dance Structures for Imitation

The iCub shows off its dance moves

Kyuhwa Lee, Tae-Kyun Kim and Yiannis Demiris, "Learning Reusable Task Representations using Hierarchical Activity Grammars with Uncertainties", IEEE International Conference on Robotics and Automation (ICRA), St. Paul, USA, 2012

iCub Grasping Demonstration

A demonstration of the iCub grasping mechanism

Yanyu Su, Yan Wu, Kyuhwa Lee, Zhijiang Du, Yiannis Demiris, "Robust Grasping Mechanism for an Under-actuated Anthropomorphic Hand under Object Position Uncertainty", IEEE-RAS International Conference on Humanoid Robots, Osaka, Japan, 2012.

iCub playing the Theremin

The iCub humanoid robot plays one of the most difficult musical instruments

The iCub humanoid robot plays the Theremin, one of the most difficult musical instruments, in real time.

ARTY Smart Wheelchair

Helping young children safely use a wheelchair

The Assistive Robotic Transport for Youngsters (ARTY) is a smart wheelchair designed to help young children with disabilities who are unable to safely use a regular powered wheelchair. It is our hope that ARTY will give users an opportunity to independently explore, learn and play.