Imperial College London

Professor Yiannis Demiris

Faculty of Engineering, Department of Electrical and Electronic Engineering

Professor of Human-Centred Robotics, Head of ISN
 
 
 

Contact

 

Tel: +44 (0)20 7594 6300 · Email: y.demiris

 
 

Location

 

Room 1014, Electrical Engineering, South Kensington Campus



 

Publications


185 results found

Cully A, Demiris Y, 2019, Online Knowledge Level Tracking with Data-Driven Student Models and Collaborative Filtering, IEEE Transactions on Knowledge and Data Engineering, Pages: 1-1, ISSN: 1041-4347

JOURNAL ARTICLE

Zhang F, Cully A, Demiris Y, 2019, Probabilistic Real-Time User Posture Tracking for Personalized Robot-Assisted Dressing, IEEE Transactions on Robotics, Pages: 1-16, ISSN: 1552-3098

JOURNAL ARTICLE

Moulin-Frier C, Fischer T, Petit M, Pointeau G, Puigbo J-Y, Pattacini U, Ching Low S, Camilleri D, Nguyen P, Hoffmann M, Chang HJ, Zambelli M, Mealier A-L, Damianou A, Metta G, Prescott TJ, Demiris Y, Dominey PF, Verschure PFMJ et al., 2018, DAC-h3: A Proactive Robot Cognitive Architecture to Acquire and Express Knowledge About the World and the Self, IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, Vol: 10, Pages: 1005-1022, ISSN: 2379-8920

JOURNAL ARTICLE

Chang HJ, Fischer T, Petit M, Zambelli M, Demiris Y et al., 2018, Learning Kinematic Structure Correspondences Using Multi-Order Similarities, IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, Vol: 40, Pages: 2920-2934, ISSN: 0162-8828

JOURNAL ARTICLE

Sarabia M, Young N, Canavan K, Edginton T, Demiris Y, Vizcaychipi MP et al., 2018, Assistive robotic technology to combat social isolation in acute hospital settings, International Journal of Social Robotics, Vol: 10, Pages: 607-620, ISSN: 1875-4791

Social isolation in hospitals is a well-established risk factor for complications such as cognitive decline and depression. Assistive robotic technology has the potential to combat this problem, but first it is critical to investigate how hospital patients react to this technology. In order to address this question, we introduced a remotely operated NAO humanoid robot which conversed, made jokes, played music, danced and exercised with patients in a London hospital. In total, 49 patients aged between 18 and 100 took part in the study, 7 of whom had dementia. Our results show that a majority of patients enjoyed their interaction with NAO. We also found that age and dementia significantly affect the interaction, whereas gender does not. These results indicate that hospital patients enjoy socialising with robots, opening new avenues for future research into the potential health benefits of a social robotic companion.

JOURNAL ARTICLE

Goncalves Nunes U, Demiris Y, 2018, 3D motion segmentation of articulated rigid bodies based on RGB-D data, British Machine Vision Conference (BMVC 2018), Publisher: British Machine Vision Association (BMVA)

This paper addresses the problem of motion segmentation of articulated rigid bodies from a single-view RGB-D data sequence. Current methods either perform dense motion segmentation, and are consequently very computationally demanding, or rely on sparse 2D feature points, which may not be sufficient to represent the entire scene. In this paper, we advocate the use of 3D semi-dense motion segmentation, which also bridges some limitations of standard 2D methods (e.g. background removal). We cast the 3D motion segmentation problem as a subspace clustering problem, adding an adaptive spectral clustering step that estimates the number of rigid object parts. The resultant method has few parameters to adjust, takes less time than the temporal length of the scene and requires no post-processing.

CONFERENCE PAPER
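The abstract above casts semi-dense 3D motion segmentation as subspace clustering with an adaptive spectral clustering step that estimates the number of rigid parts. As a rough, generic illustration of that last idea (not the authors' implementation), the sketch below picks the number of clusters from the largest eigengap of a normalized graph Laplacian built over point trajectories; the affinity construction and parameter values are assumptions.

```python
# Generic illustration only (not the paper's method): adaptive spectral clustering
# of 3D point trajectories, choosing the number of rigid parts from the eigengap.
import numpy as np
from sklearn.cluster import SpectralClustering

def build_affinity(trajectories, sigma=0.5):
    """trajectories: (N, T, 3) array of N semi-dense 3D point tracks.
    Gaussian kernel on pairwise trajectory distances (assumed affinity)."""
    flat = trajectories.reshape(len(trajectories), -1)
    sq_dists = ((flat[:, None, :] - flat[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def estimate_num_parts(affinity, k_max=10):
    """Pick the cluster count from the largest gap in the spectrum of the
    normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    deg = affinity.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    lap = np.eye(len(affinity)) - d_inv_sqrt[:, None] * affinity * d_inv_sqrt[None, :]
    eigvals = np.sort(np.linalg.eigvalsh(lap))[:k_max + 1]
    return int(np.argmax(np.diff(eigvals))) + 1

def segment_motion(trajectories):
    affinity = build_affinity(trajectories)
    k = estimate_num_parts(affinity)
    labels = SpectralClustering(n_clusters=k, affinity="precomputed",
                                random_state=0).fit_predict(affinity)
    return k, labels
```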

Chang HJ, Demiris Y, 2018, Highly Articulated Kinematic Structure Estimation Combining Motion and Skeleton Information, IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, Vol: 40, Pages: 2165-2179, ISSN: 0162-8828

JOURNAL ARTICLE

Kucukyilmaz A, Demiris Y, 2018, Learning shared control by demonstration for personalized wheelchair assistance, IEEE Transactions on Haptics, Vol: 11, Pages: 431-442, ISSN: 1939-1412

An emerging research problem in assistive robotics is the design of methodologies that allow robots to provide personalized assistance to users. For this purpose, we present a method to learn shared control policies from demonstrations offered by a human assistant. We train a Gaussian process (GP) regression model to continuously regulate the level of assistance between the user and the robot, given the user's previous and current actions and the state of the environment. The assistance policy is learned after only a single human demonstration, i.e., in one shot. Our technique is evaluated in a one-of-a-kind experimental study, where the machine-learned shared control policy is compared to human assistance. Our analyses show that our technique is successful in emulating human shared control, by matching the location and amount of offered assistance on different trajectories. We observed that the effort requirements of the users were comparable between human-robot and human-human settings. Under the learned policy, the jerkiness of the user's joystick movements dropped significantly, despite a significant increase in the jerkiness of the robot assistant's commands. In terms of performance, even though the robotic assistance increased task completion time, the average distance to obstacles stayed in similar ranges to human assistance.

JOURNAL ARTICLE
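The paper above learns a GP regression model that maps the user's previous and current actions plus the environment state to a level of assistance, from a single demonstration. A minimal, hypothetical sketch of that kind of blending with scikit-learn follows; the feature layout, toy data and the linear blending of commands are assumptions, not the published implementation.

```python
# Hypothetical sketch of GP-regulated shared control (assumed features and
# blending rule; not the paper's implementation).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Toy demonstration: rows stand in for [previous user action, current user action,
# environment state...]; targets are assistance levels (0 = full user control,
# 1 = full robot control).
X_demo = rng.normal(size=(200, 6))
alpha_demo = 1.0 / (1.0 + np.exp(-X_demo[:, 0]))   # stand-in assistance labels

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_demo, alpha_demo)

def blended_command(user_cmd, robot_cmd, features):
    """Blend joystick and autonomous commands with the GP-predicted assistance level."""
    alpha = float(np.clip(gp.predict(features.reshape(1, -1))[0], 0.0, 1.0))
    return (1.0 - alpha) * user_cmd + alpha * robot_cmd

cmd = blended_command(np.array([1.0, 0.0]), np.array([0.5, 0.5]), rng.normal(size=6))
```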

Fischer T, Demiris Y, 2018, A computational model for embodied visual perspective taking: from physical movements to mental simulation, Vision Meets Cognition Workshop at CVPR 2018

To understand people and their intentions, humans have developed the ability to imagine their surroundings from another visual point of view. This cognitive ability is called perspective taking and has been shown to be essential in child development and social interactions. However, the precise cognitive mechanisms underlying perspective taking remain to be fully understood. Here we present a computational model that implements perspective taking as a mental simulation of the physical movements required to step into the other point of view. The visual percept after each mental simulation step is estimated using a set of forward models. Based on our experimental results, we propose that a visual attention mechanism explains the response times reported in human visual perspective taking experiments. The model is also able to generate several testable predictions to be explored in further neurophysiological studies.

CONFERENCE PAPER
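The model above implements perspective taking as a mental simulation of the physical movements needed to step into the other viewpoint, with forward models predicting the percept at each step. The toy sketch below captures only the step-counting aspect (incremental rotation steps as a crude proxy for the effort/response-time link discussed in the abstract); the step size and the omission of the forward models are simplifications, not the paper's model.

```python
# Toy illustration (not the paper's model): counting incremental mental-rotation
# steps toward another agent's viewpoint as a crude proxy for simulated effort.
import math

def simulated_rotation_steps(own_heading_deg, other_heading_deg, step_deg=10.0):
    """Number of fixed-size rotation steps needed to align with the other viewpoint."""
    diff = (other_heading_deg - own_heading_deg + 180.0) % 360.0 - 180.0  # shortest angle
    return math.ceil(abs(diff) / step_deg)

print(simulated_rotation_steps(0.0, 135.0))  # 14 steps of 10 degrees
```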

Cully A, Demiris Y, 2018, Quality and Diversity Optimization: A Unifying Modular Framework, IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, Vol: 22, Pages: 245-259, ISSN: 1089-778X

JOURNAL ARTICLE

Fischer T, Puigbo J-Y, Camilleri D, Nguyen PDH, Moulin-Frier C, Lallee S, Metta G, Prescott TJ, Demiris Y, Verschure PFMJ et al., 2018, iCub-HRI: A Software Framework for Complex Human-Robot Interaction Scenarios on the iCub Humanoid Robot, FRONTIERS IN ROBOTICS AND AI, Vol: 5, ISSN: 2296-9144

JOURNAL ARTICLE

Choi J, Chang HJ, Fischer T, Yun S, Lee K, Jeong J, Demiris Y, Choi JY et al., 2018, Context-aware Deep Feature Compression for High-speed Visual Tracking, 31st IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 479-488, ISSN: 1063-6919

CONFERENCE PAPER

Nguyen PDH, Fischer T, Chang HJ, Pattacini U, Metta G, Demiris Y et al., 2018, Transferring Visuomotor Learning from Simulation to the Real World for Robotics Manipulation Tasks, 25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 6667-6674, ISSN: 2153-0858

CONFERENCE PAPER

Zolotas M, Elsdon J, Demiris Y, 2018, Head-Mounted Augmented Reality for Explainable Robotic Wheelchair Assistance, 25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 1823-1829, ISSN: 2153-0858

CONFERENCE PAPER

Wang R, Amadori P, Demiris Y, 2018, Real-Time Workload Classification during Driving using HyperNetworks, 25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 3060-3065, ISSN: 2153-0858

CONFERENCE PAPER

Elsdon J, Demiris Y, 2018, Augmented Reality for Feedback in a Shared Control Spraying Task, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE COMPUTER SOC, Pages: 1939-1946, ISSN: 1050-4729

CONFERENCE PAPER

Fischer T, Chang HJ, Demiris Y, 2018, RT-GENE: Real-time eye gaze estimation in natural environments, European Conference on Computer Vision (ECCV), Publisher: Springer, Pages: 339-357, ISSN: 0302-9743

In this work, we consider the problem of robust gaze estimation in natural environments. Large camera-to-subject distances and high variations in head pose and eye gaze angles are common in such environments. This leads to two main shortfalls in state-of-the-art methods for gaze estimation: hindered ground truth gaze annotation and diminished gaze estimation accuracy as image resolution decreases with distance. We first record a novel dataset of varied gaze and head pose images in a natural environment, addressing the issue of ground truth annotation by measuring head pose using a motion capture system and eye gaze using mobile eye-tracking glasses. We apply semantic image inpainting to the area covered by the glasses to bridge the gap between training and testing images by removing the obtrusiveness of the glasses. We also present a new real-time algorithm involving appearance-based deep convolutional neural networks with increased capacity to cope with the diverse images in the new dataset. Experiments with this network architecture are conducted on a number of diverse eye-gaze datasets including our own, and in cross-dataset evaluations. We demonstrate state-of-the-art performance in terms of estimation accuracy in all experiments, and the architecture performs well even on lower resolution images.

CONFERENCE PAPER
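RT-GENE's estimator is described above as an appearance-based deep convolutional network for gaze estimation in natural images. The PyTorch snippet below is only a tiny structural stand-in showing one plausible input/output interface (two eye patches plus head pose in, gaze yaw/pitch out); the layer sizes and fusion scheme are assumptions and not the published architecture.

```python
# Minimal PyTorch stand-in for an appearance-based gaze regressor
# (illustrative only; RT-GENE's published architecture is larger and different).
import torch
import torch.nn as nn

class TinyGazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.eye_branch = nn.Sequential(          # shared CNN over each eye patch
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(                # fuse both eyes + head pose -> (yaw, pitch)
            nn.Linear(32 * 2 + 2, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, left_eye, right_eye, head_pose):
        feats = torch.cat([self.eye_branch(left_eye),
                           self.eye_branch(right_eye), head_pose], dim=1)
        return self.head(feats)  # gaze (yaw, pitch)

net = TinyGazeNet()
gaze = net(torch.randn(4, 3, 36, 60), torch.randn(4, 3, 36, 60), torch.randn(4, 2))
print(gaze.shape)  # torch.Size([4, 2])
```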

Cully A, Demiris Y, 2018, Hierarchical behavioral repertoires with unsupervised descriptors, the Genetic and Evolutionary Computation Conference, Publisher: ACM Press

CONFERENCE PAPER

Zambelli M, Demiris Y, 2017, Online Multimodal Ensemble Learning Using Self-Learned Sensorimotor Representations, IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, Vol: 9, Pages: 113-126, ISSN: 2379-8920

JOURNAL ARTICLE

Korkinof D, Demiris Y, 2017, Multi-task and multi-kernel Gaussian process dynamical systems, PATTERN RECOGNITION, Vol: 66, Pages: 190-201, ISSN: 0031-3203

JOURNAL ARTICLE

Georgiou T, Demiris Y, 2017, Adaptive user modelling in car racing games using behavioural and physiological data, USER MODELING AND USER-ADAPTED INTERACTION, Vol: 27, Pages: 267-311, ISSN: 0924-1868

JOURNAL ARTICLE

Elsdon J, Demiris Y, 2017, Assisted painting of 3D structures using shared control with a hand-held robot, 2017 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE

CONFERENCE PAPER

Choi J, Chang HJ, Yun S, Fischer T, Demiris Y, Choi JY et al., 2017, Attentional Correlation Filter Network for Adaptive Visual Tracking, 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 4828-4837, ISSN: 1063-6919

CONFERENCE PAPER

Zhang F, Cully A, Demiris Y, 2017, Personalized Robot-assisted Dressing using User Modeling in Latent Spaces, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 3603-3610, ISSN: 2153-0858

CONFERENCE PAPER

Yoo Y, Yun S, Chang HJ, Demiris Y, Choi JY et al., 2017, Variational Autoencoded Regression: High Dimensional Regression of Visual Data on Complex Manifold, 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 2943-2952, ISSN: 1063-6919

CONFERENCE PAPER

Ros R, Oleari E, Pozzi C, Sacchitelli F, Baranzini D, Bagherzadhalimi A, Sanna A, Demiris Y et al., 2016, A Motivational Approach to Support Healthy Habits in Long-term Child-Robot Interaction, International Journal of Social Robotics, Vol: 8, Pages: 599-617, ISSN: 1875-4791

JOURNAL ARTICLE

Petit M, Fischer T, Demiris Y, 2016, Towards the Emergence of Procedural Memories from Lifelong Multi-Modal Streaming Memories for Cognitive Robots, Workshop on Machine Learning Methods for High-Level Cognitive Capabilities in Robotics at IEEE/RSJ IROS

Various research topics are emerging as the demand for intelligent lifelong interactions between robots and humans increases. Among them, we can find the examination of persistent storage, the continuous unsupervised annotation of memories and the usage of data at high frequency over long periods of time. We recently proposed a lifelong autobiographical memory architecture tackling some of these challenges, allowing the iCub humanoid robot to 1) create new memories for both actions that are self-executed and observed from humans, 2) continuously annotate these actions in an unsupervised manner, and 3) use reasoning modules to augment these memories a posteriori. In this paper, we present a reasoning algorithm which generalises the robot's understanding of actions by finding points of commonality with previously stored ones. In particular, we generated and labelled templates of pointing actions in different directions. This represents a first step towards the emergence of a procedural memory within a long-term autobiographical memory framework for robots.

CONFERENCE PAPER
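The reasoning step described above extracts what labelled action memories have in common (e.g. templates of pointing actions in different directions). Purely as an illustration of one simple way to build such a template (not the paper's reasoning algorithm), the sketch below resamples demonstrations of the same labelled action to a common length and averages them.

```python
# Illustrative sketch only (not the paper's algorithm): a labelled action
# "template" obtained by time-normalising demonstrations and averaging them.
import numpy as np

def resample(traj, length=50):
    """Linearly resample a (T, D) trajectory to a fixed number of timesteps."""
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, length)
    return np.stack([np.interp(t_new, t_old, traj[:, d])
                     for d in range(traj.shape[1])], axis=1)

def build_template(demonstrations, length=50):
    """Average several demonstrations of the same labelled action."""
    return np.mean([resample(d, length) for d in demonstrations], axis=0)

rng = np.random.default_rng(0)
demos = [rng.normal(size=(t, 3)) for t in (40, 55, 62)]   # toy pointing trajectories
template = build_template(demos)                           # shape (50, 3)
```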

Zambelli M, Fischer T, Petit M, Chang HJ, Cully A, Demiris Y et al., 2016, Towards Anchoring Self-Learned Representations to Those of Other Agents, Workshop on Bio-inspired Social Robot Learning in Home Scenarios, IEEE/RSJ International Conference on Intelligent Robots and Systems, Publisher: Institute of Electrical and Electronics Engineers (IEEE)

In the future, robots will support humans in their everyday activities. One particular challenge that robots will face is understanding and reasoning about the actions of other agents in order to cooperate effectively with humans. We propose to tackle this using a developmental framework, where the robot incrementally acquires knowledge, and in particular 1) self-learns a mapping between motor commands and sensory consequences, 2) rapidly acquires primitives and complex actions from verbal descriptions and instructions given by a human partner, 3) discovers correspondences between the robot's body and other articulated objects and agents, and 4) employs these correspondences to transfer the knowledge acquired from the robot's point of view to the viewpoint of the other agent. We show that our approach requires very little a priori knowledge to achieve imitation learning and to find corresponding body parts of humans, and allows taking the perspective of another agent. This represents a step towards the emergence of a mirror-neuron-like system based on self-learned representations.

CONFERENCE PAPER
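Step 1) of the abstract above, self-learning a mapping between motor commands and sensory consequences, is essentially learning a forward model by regression. A hedged, toy-data sketch of such a forward model follows; the data shapes and the choice of regressor are assumptions, not the paper's multimodal iCub setup.

```python
# Toy sketch of a self-learned forward model: motor commands -> predicted
# sensory consequences (illustrative only; synthetic data stands in for the
# robot's real sensorimotor streams).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
motor_cmds = rng.uniform(-1.0, 1.0, size=(500, 4))        # toy joint velocity commands
sensory = np.tanh(motor_cmds @ rng.normal(size=(4, 6)))   # toy sensory outcomes

forward_model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
forward_model.fit(motor_cmds, sensory)

predicted = forward_model.predict(motor_cmds[:1])          # predicted sensory consequence
print(predicted.shape)  # (1, 6)
```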

Petit M, Fischer T, Demiris Y, 2016, Lifelong Augmentation of Multimodal Streaming Autobiographical Memories, IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, Vol: 8, Pages: 201-213, ISSN: 2379-8920

JOURNAL ARTICLE

Gao Y, Chang HJ, Demiris Y, 2016, Personalised assistive dressing by humanoid robots using multi-modal information, Workshop on Human-Robot Interfaces for Enhanced Physical Interactions at ICRA

In this paper, we present an approach to enable a humanoid robot to provide personalised dressing assistance for human users using multi-modal information. A depth sensor is mounted on top of the robot to provide visual information, and the robot end effectors are equipped with force sensors to provide haptic information. We use the visual information to model the movement range of human upper-body parts. The robot plans the dressing motions using the movement range models and the real-time human pose. During assistive dressing, the force sensors are used to detect external force resistances. We present how the robot locally adjusts its motions based on the detected forces. In the experiments we show that the robot can assist a human to wear a sleeveless jacket while reacting to the force resistances.

CONFERENCE PAPER
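The abstract above states that the robot locally adjusts its dressing motions when the end-effector force sensors detect external resistance. The snippet below is a hypothetical illustration of one such local adjustment rule; the threshold, gain and backoff strategy are all assumptions, not the authors' controller.

```python
# Hypothetical local adjustment rule (not the paper's controller): shrink and
# deflect the next Cartesian step when measured resistance exceeds a threshold.
import numpy as np

FORCE_LIMIT = 8.0  # assumed resistance threshold in newtons, for illustration only

def adjust_step(planned_step, measured_force, gain=0.01):
    """Return a modified Cartesian step given the measured end-effector force."""
    magnitude = np.linalg.norm(measured_force)
    if magnitude <= FORCE_LIMIT:
        return planned_step                      # no resistance detected: follow the plan
    # Back off along the force direction, proportional to the excess force.
    backoff = -gain * (magnitude - FORCE_LIMIT) * (measured_force / magnitude)
    return 0.5 * planned_step + backoff

step = adjust_step(np.array([0.0, 0.01, 0.0]), np.array([0.0, 12.0, 0.0]))
```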

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
