Imperial College London

Professor Yiannis Demiris

Faculty of Engineering, Department of Electrical and Electronic Engineering

Professor of Human-Centred Robotics, Head of ISN

Contact

+44 (0)20 7594 6300 · y.demiris · Website

Location

1014, Electrical Engineering, South Kensington Campus


Publications


176 results found

Chang HJ, Demiris Y, 2018, Highly Articulated Kinematic Structure Estimation Combining Motion and Skeleton Information, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol: 40, Pages: 2165-2179

In this paper, we present a novel framework for unsupervised kinematic structure learning of complex articulated objects from a single-view 2D image sequence. In contrast to prior motion-based methods, which estimate relatively simple articulations, our method can generate arbitrarily complex kinematic structures with skeletal topology via a successive iterative merging strategy. The iterative merge process is guided by a density weighted skeleton map which is generated from a novel object boundary generation method from sparse 2D feature points. Our main contributions can be summarised as follows: (i) An unsupervised complex articulated kinematic structure estimation method that combines motion segments with skeleton information. (ii) An iterative fine-to-coarse merging strategy for adaptive motion segmentation and structural topology embedding. (iii) A skeleton estimation method based on a novel silhouette boundary generation from sparse feature points using an adaptive model selection method. (iv) A new highly articulated object dataset with ground truth annotation. We have verified the effectiveness of our proposed method in terms of computational time and estimation accuracy through rigorous experiments with multiple datasets. Our experiments show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.
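As a rough illustration of the fine-to-coarse merging idea, the toy sketch below repeatedly fuses the pair of motion segments whose mean trajectories move most similarly. The segment representation, affinity measure, and stopping threshold are assumptions for this sketch, and the paper's density-weighted skeleton-map guidance is omitted for brevity.

```python
# Toy sketch of fine-to-coarse motion segment merging (names and
# thresholds are illustrative, not the paper's implementation).
import numpy as np

def motion_affinity(seg_a, seg_b):
    """Cosine similarity between the mean velocity profiles of two segments.
    seg_*: arrays of shape (n_points, n_frames, 2), same n_frames."""
    ta, tb = seg_a.mean(axis=0), seg_b.mean(axis=0)        # (n_frames, 2)
    va, vb = np.diff(ta, axis=0).ravel(), np.diff(tb, axis=0).ravel()
    denom = np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9
    return float(va @ vb) / denom

def merge_segments(segments, stop_threshold=0.9):
    """Iteratively merge the most similar pair of motion segments."""
    segments = list(segments)
    while len(segments) > 1:
        pairs = [(i, j) for i in range(len(segments))
                 for j in range(i + 1, len(segments))]
        scores = [motion_affinity(segments[i], segments[j]) for i, j in pairs]
        best = int(np.argmax(scores))
        if scores[best] < stop_threshold:
            break  # remaining segments move independently: keep them split
        i, j = pairs[best]
        merged = np.concatenate([segments[i], segments[j]], axis=0)
        segments = [s for k, s in enumerate(segments) if k not in (i, j)]
        segments.append(merged)
    return segments
```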

JOURNAL ARTICLE

Choi J, Chang HJ, Fischer T, Yun S, Lee K, Jeong J, Demiris Y, Choi JY et al., 2018, Context-aware Deep Feature Compression for High-speed Visual Tracking

We propose a new context-aware correlation filter based tracking framework to achieve both high computational speed and state-of-the-art performance among real-time trackers. The major contribution to the high computational speed lies in the proposed deep feature compression that is achieved by a context-aware scheme utilizing multiple expert auto-encoders; a context in our framework refers to the coarse category of the tracking target according to appearance patterns. In the pre-training phase, one expert auto-encoder is trained per category. In the tracking phase, the best expert auto-encoder is selected for a given target, and only this auto-encoder is used. To achieve high tracking performance with the compressed feature map, we introduce extrinsic denoising processes and a new orthogonality loss term for pre-training and fine-tuning of the expert auto-encoders. We validate the proposed context-aware framework through a number of experiments, where our method achieves a comparable performance to state-of-the-art trackers which cannot run in real-time, while running at a significantly fast speed of over 100 fps.
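A minimal sketch of the expert-selection step, with tiny linear auto-encoders standing in for the pre-trained deep experts. The paper selects the expert with its own context-aware scheme; picking the expert with the lowest reconstruction error on the initial target feature is used here as a plausible stand-in.

```python
# Illustrative expert selection: one auto-encoder per context category,
# only the best-matching one runs during tracking.
import numpy as np

class LinearAutoEncoder:
    def __init__(self, dim_in, dim_code, rng):
        self.enc = rng.standard_normal((dim_in, dim_code)) * 0.1
        self.dec = rng.standard_normal((dim_code, dim_in)) * 0.1
    def reconstruct(self, x):
        return (x @ self.enc) @ self.dec  # compress, then decompress

def select_expert(experts, target_feature):
    """Pick the context expert whose reconstruction error is lowest."""
    errors = [np.linalg.norm(target_feature - e.reconstruct(target_feature))
              for e in experts]
    return int(np.argmin(errors))

rng = np.random.default_rng(0)
experts = [LinearAutoEncoder(512, 64, rng) for _ in range(10)]  # one per category
feature = rng.standard_normal(512)      # raw deep feature of the target
best = select_expert(experts, feature)  # only this expert is used afterwards
```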

CONFERENCE PAPER

Cully A, Demiris Y, 2018, Quality and Diversity Optimization: A Unifying Modular Framework, IEEE Transactions on Evolutionary Computation, Vol: 22, Pages: 245-259, ISSN: 1089-778X

JOURNAL ARTICLE

Fischer T, Demiris Y, 2018, A computational model for embodied visual perspective taking: from physical movements to mental simulation, Vision Meets Cognition Workshop at CVPR 2018

To understand people and their intentions, humans have developed the ability to imagine their surroundings from another visual point of view. This cognitive ability is called perspective taking and has been shown to be essential in child development and social interactions. However, the precise cognitive mechanisms underlying perspective taking remain to be fully understood. Here we present a computational model that implements perspective taking as a mental simulation of the physical movements required to step into the other point of view. The visual percept after each mental simulation step is estimated using a set of forward models. Based on our experimental results, we propose that a visual attention mechanism explains the response times reported in human visual perspective taking experiments. The model is also able to generate several testable predictions to be explored in further neurophysiological studies.
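The sketch below illustrates perspective taking as stepwise mental rotation, with a forward model predicting the percept at each simulation step. The step size, the planar scene, and the projection stub are assumptions, not the paper's model.

```python
# Hedged sketch: the step count grows with the angular offset between
# viewpoints, acting as a proxy for the measured response time.
import numpy as np

def forward_model(heading_deg, scene_points):
    """Stub forward model: rotate 2D scene points into the simulated view."""
    th = np.deg2rad(heading_deg)
    rot = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return scene_points @ rot.T

def take_perspective(own_heading, other_heading, scene_points, step_deg=10.0):
    """Rotate the simulated viewpoint in fixed increments towards the other
    agent's heading; more steps means a slower predicted response."""
    diff = (other_heading - own_heading + 180.0) % 360.0 - 180.0
    steps = int(np.ceil(abs(diff) / step_deg))
    heading = own_heading
    percept = forward_model(heading, scene_points)
    for _ in range(steps):
        heading += np.sign(diff) * step_deg
        percept = forward_model(heading, scene_points)  # predicted view
    return steps, percept

scene = np.array([[1.0, 0.0], [0.0, 2.0]])
steps, view = take_perspective(0.0, 135.0, scene)  # larger angles take longer
```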

CONFERENCE PAPER

Fischer T, Puigbo J-Y, Camilleri D, Nguyen PDH, Moulin-Frier C, Lallee S, Metta G, Prescott TJ, Demiris Y, Verschure PFMJ et al., 2018, iCub-HRI: A Software Framework for Complex Human-Robot Interaction Scenarios on the iCub Humanoid Robot, Frontiers in Robotics and AI, Vol: 5, ISSN: 2296-9144

JOURNAL ARTICLE

Nguyen P, Fischer T, Chang HJ, Pattacini U, Metta G, Demiris Y et al., 2018, Transferring Visuomotor Learning from Simulation to the Real World for Robotics Manipulation Tasks, IEEE/RSJ International Conference on Intelligent Robots and Systems, Publisher: IEEE

Hand-eye coordination is a requirement for many manipulation tasks including grasping and reaching. However, accurate hand-eye coordination has been shown to be especially difficult to achieve in complex robots like the iCub humanoid. In this work, we solve the hand-eye coordination task using a visuomotor deep neural network predictor that estimates the arm's joint configuration given a stereo image pair of the arm and the underlying head configuration. As there are various unavoidable sources of sensing error on the physical robot, we train the predictor on images obtained from simulation. The images from simulation were modified to look realistic using an image-to-image translation approach. In various experiments, we first show that the visuomotor predictor provides accurate joint estimates of the iCub's hand in simulation. We then show that the predictor can be used to obtain the systematic error of the robot's joint measurements on the physical iCub robot. We demonstrate that a calibrator can be designed to automatically compensate for this error. Finally, we validate that this enables accurate reaching of objects while circumventing manual fine-calibration of the robot.
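A hedged sketch of the calibration step described above: the systematic error is estimated as the mean gap between the network's joint estimates and the raw encoder readings, and later readings are corrected by that offset. The constant-offset assumption and the function names are illustrative, not the paper's calibrator.

```python
# Sketch: learn a per-joint systematic offset, then compensate it.
import numpy as np

def estimate_joint_offsets(predicted_joints, measured_joints):
    """predicted_joints, measured_joints: (n_samples, n_joints) arrays.
    Returns the per-joint systematic error, assumed constant."""
    return np.mean(predicted_joints - measured_joints, axis=0)

def calibrate(measured_joints, offsets):
    """Correct raw encoder readings with the learned offsets."""
    return measured_joints + offsets

pred = np.array([[0.52, 1.01], [0.48, 0.98], [0.51, 1.03]])  # from the network
meas = np.array([[0.40, 0.90], [0.41, 0.88], [0.39, 0.93]])  # from encoders
offsets = estimate_joint_offsets(pred, meas)
corrected = calibrate(meas, offsets)
```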

CONFERENCE PAPER

Chang HJ, Fischer T, Petit M, Zambelli M, Demiris Y et al., 2017, Learning Kinematic Structure Correspondences Using Multi-Order Similarities, IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN: 0162-8828

We present a novel framework for finding the kinematic structure correspondences between two articulated objects in videos via hypergraph matching. In contrast to appearance and graph alignment based matching methods, which have been applied between two similar static images, the proposed method finds correspondences between two dynamic kinematic structures of heterogeneous objects in videos. Thus our method allows matching the structure of objects which have similar topologies or motions, or a combination of the two. Our main contributions are summarised as follows: (i) casting the kinematic structure correspondence problem into a hypergraph matching problem by incorporating multi-order similarities with normalising weights, (ii) introducing a structural topology similarity measure by aggregating topology constrained subgraph isomorphisms, (iii) measuring kinematic correlations between pairwise nodes, and (iv) proposing a combinatorial local motion similarity measure using geodesic distance on the Riemannian manifold. We demonstrate the robustness and accuracy of our method through a number of experiments on synthetic and real data, showing that various other recent and state-of-the-art methods are outperformed. Our method is not limited to a specific application or sensor, and can be used as a building block in applications such as action recognition, human motion retargeting to robots, and articulated object manipulation.
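A hedged, first-order-only illustration of combining similarity terms with normalising weights before matching. The paper combines unary, pairwise, and third-order terms in a hypergraph; here SciPy's linear assignment solver stands in for hypergraph matching, and the placeholder matrices are assumptions.

```python
# Combine normalised similarity matrices with weights, then match nodes.
import numpy as np
from scipy.optimize import linear_sum_assignment

def combine_similarities(sims, weights):
    """sims: list of (n, m) similarity matrices (e.g. topology, motion).
    Each matrix is rescaled to [0, 1] before weighting."""
    total = np.zeros_like(sims[0], dtype=float)
    for s, w in zip(sims, weights):
        span = s.max() - s.min() + 1e-9
        total += w * (s - s.min()) / span
    return total / sum(weights)

topology_sim = np.random.rand(5, 6)  # placeholder similarity matrices
motion_sim = np.random.rand(5, 6)
combined = combine_similarities([topology_sim, motion_sim], [0.5, 0.5])
rows, cols = linear_sum_assignment(-combined)  # maximise total similarity
```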

JOURNAL ARTICLE

Choi J, Chang HJ, Yun S, Fischer T, Demiris Y, Choi JY et al., 2017, Attentional Correlation Filter Network for Adaptive Visual Tracking, 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 4828-4837, ISSN: 1063-6919

CONFERENCE PAPER

Elsdon J, Demiris Y, 2017, Assisted painting of 3D structures using shared control with a hand-held robot, IEEE International Conference on Robotics and Automation, Publisher: IEEE

We present a shared control method for painting 3D geometries, using a handheld robot which has a single autonomously controlled degree of freedom. The user scans the robot near to the desired painting location, while the single movement axis moves the spray head to achieve the required paint distribution. A simultaneous simulation of the spraying procedure is performed, giving an open loop approximation of the current state of the painting. An online prediction of the best path for the spray nozzle actuation is calculated in a receding horizon fashion. This is done by producing a map of the paint required in the 2D space defined by nozzle position on the gantry and the time into the future. A directed graph then extracts its edge weights from this paint density map, and Dijkstra's algorithm is used to find the candidate for the most effective path. Due to the heavy parallelisation of this approach and the majority of the calculations taking place on a GPU, we can run the prediction loop in 32.6 ms for a prediction horizon of 1 second; this approach is computationally efficient, outperforming a greedy algorithm. The path chosen by the proposed method is on average in the top 15% of all paths as calculated by exhaustive testing. This approach enables the development of real-time path planning for assisted spray painting onto complicated 3D geometries. This method could be applied to applications such as assistive painting for people with disabilities, or accurate placement of liquid when large-scale positioning of the head is too expensive.
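A minimal sketch of the graph search described above: nodes are (time step, nozzle cell) pairs in the time-expanded 2D map, edge costs are low where paint demand is high, and a hand-rolled Dijkstra finds the cheapest, i.e. most paint-effective, path. The grid shape, the cost transform, and the per-step movement limit are assumptions for this sketch.

```python
# Dijkstra over a time-expanded paint-demand map.
import heapq
import numpy as np

def best_nozzle_path(demand, max_move=1):
    """demand: (n_steps, n_cells) map of paint required ahead of time.
    Cost = max demand - local demand, so cheap paths visit high demand."""
    n_steps, n_cells = demand.shape
    cost = demand.max() - demand          # non-negative, as Dijkstra needs
    dist = np.full((n_steps, n_cells), np.inf)
    prev = {}
    dist[0, :] = cost[0, :]
    heap = [(dist[0, c], 0, c) for c in range(n_cells)]
    heapq.heapify(heap)
    while heap:
        d, t, c = heapq.heappop(heap)
        if d > dist[t, c] or t == n_steps - 1:
            continue                      # stale entry or final time step
        lo, hi = max(0, c - max_move), min(n_cells, c + max_move + 1)
        for nc in range(lo, hi):          # nozzle can only move so far per step
            nd = d + cost[t + 1, nc]
            if nd < dist[t + 1, nc]:
                dist[t + 1, nc] = nd
                prev[(t + 1, nc)] = c
                heapq.heappush(heap, (nd, t + 1, nc))
    c = int(np.argmin(dist[-1]))          # cheapest endpoint, then walk back
    path = [c]
    for t in range(n_steps - 1, 0, -1):
        c = prev[(t, c)]
        path.append(c)
    return path[::-1]

demand = np.random.rand(10, 8)            # toy paint-demand map
print(best_nozzle_path(demand))
```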

CONFERENCE PAPER

Georgiou T, Demiris Y, 2017, Adaptive user modelling in car racing games using behavioural and physiological data, User Modeling and User-Adapted Interaction, Vol: 27, Pages: 267-311, ISSN: 0924-1868

JOURNAL ARTICLE

Korkinof D, Demiris Y, 2017, Multi-task and multi-kernel Gaussian process dynamical systems, Pattern Recognition, Vol: 66, Pages: 190-201, ISSN: 0031-3203

JOURNAL ARTICLE

Moulin-Frier C, Fischer T, Petit M, Pointeau G, Puigbo JY, Pattacini U, Low SC, Camilleri D, Nguyen P, Hoffmann M, Chang HJ, Zambelli M, Mealier AL, Damianou A, Metta G, Prescott TJ, Demiris Y, Dominey PF, Verschure PFMJ et al., 2017, DAC-h3: A Proactive Robot Cognitive Architecture to Acquire and Express Knowledge About the World and the Self, IEEE Transactions on Cognitive and Developmental Systems, ISSN: 2379-8920

This paper introduces a cognitive architecture for a humanoid robot to engage in a proactive, mixed-initiative exploration and manipulation of its environment, where the initiative can originate from both the human and the robot. The framework, based on a biologically-grounded theory of the brain and mind, integrates a reactive interaction engine, a number of state-of-the-art perceptual and motor learning algorithms, as well as planning abilities and an autobiographical memory. The architecture as a whole drives the robot behavior to solve the symbol grounding problem, acquire language capabilities, execute goal-oriented behavior, and express a verbal narrative of its own experience in the world. We validate our approach in human-robot interaction experiments with the iCub humanoid robot, showing that the proposed cognitive architecture can be applied in real time within a realistic scenario and that it can be used with naive users.

JOURNAL ARTICLE

Yoo Y, Yun S, Chang HJ, Demiris Y, Choi JY et al., 2017, Variational Autoencoded Regression: High Dimensional Regression of Visual Data on Complex Manifold, 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 2943-2952, ISSN: 1063-6919

CONFERENCE PAPER

Zambelli M, Demiris Y, 2017, Online Multimodal Ensemble Learning Using Self-Learned Sensorimotor Representations, IEEE Transactions on Cognitive and Developmental Systems, Vol: 9, Pages: 113-126, ISSN: 2379-8920

JOURNAL ARTICLE

Zhang F, Cully A, Demiris Y, 2017, Personalized Robot-assisted Dressing using User Modeling in Latent Spaces, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 3603-3610, ISSN: 2153-0858

CONFERENCE PAPER

Chang HJ, Fischer T, Petit M, Zambelli M, Demiris Y et al., 2016, Kinematic Structure Correspondences via Hypergraph Matching, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 4216-4225, ISSN: 1063-6919

CONFERENCE PAPER

Choi J, Chang HJ, Jeong J, Demiris Y, Choi JY et al., 2016, Visual Tracking Using Attention-Modulated Disintegration and Integration, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 4321-4330, ISSN: 1063-6919

CONFERENCE PAPER

Coninx A, Baxter P, Oleari E, Bellini S, Bierman B, Henkemans OB, Canamero L, Cosi P, Enescu V, Espinoza RR, Hiolle A, Humbert R, Kiefer B, Kruijff-Korbayova I, Looije R-M, Mosconi M, Neerincx M, Paci G, Patsis G, Pozzi C, Sacchitelli F, Sahli H, Sanna A, Sommavilla G, Tesser F, Demiris Y, Belpaeme T et al., 2016, Towards Long-Term Social Child-Robot Interaction: Using Multi-Activity Switching to Engage Young Users, Journal of Human-Robot Interaction, Vol: 5, Pages: 32-67, ISSN: 2163-0364

JOURNAL ARTICLE

Fischer T, Demiris Y, 2016, Markerless Perspective Taking for Humanoid Robots in Unconstrained Environments, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 3309-3316, ISSN: 1050-4729

CONFERENCE PAPER

Gao Y, Chang HJ, Demiris Y, 2016, Personalised assistive dressing by humanoid robots using multi-modal information, Workshop on Human-Robot Interfaces for Enhanced Physical Interactions at ICRA

In this paper, we present an approach to enable a humanoid robot to provide personalised dressing assistance for human users using multi-modal information. A depth sensor is mounted on top of the robot to provide visual information, and the robot end effectors are equipped with force sensors to provide haptic information. We use visual information to model the movement range of human upper-body parts. The robot plans the dressing motions using the movement range models and real-time human pose. During assistive dressing, the force sensors are used to detect external force resistances. We present how the robot locally adjusts its motions based on the detected forces. In the experiments we show that the robot can assist a human to put on a sleeveless jacket while reacting to the force resistances.
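A toy version of the local adjustment rule: when the force sensor reads resistance above a threshold, the next waypoint retreats a small distance along the direction of the measured reaction force. The threshold, back-off distance, and function names are assumptions for this sketch.

```python
# Yield to external resistance detected by the end-effector force sensor.
import numpy as np

def adjust_waypoint(waypoint, force_reading, threshold=5.0, backoff=0.01):
    """Retreat along the measured reaction force when its magnitude
    exceeds the threshold; otherwise keep the planned waypoint."""
    magnitude = np.linalg.norm(force_reading)
    if magnitude > threshold:
        return waypoint + backoff * (force_reading / magnitude)
    return waypoint

wp = np.array([0.30, 0.10, 0.45])  # planned end-effector position (m)
f = np.array([0.0, 8.0, 0.0])      # external force on the end effector (N)
print(adjust_waypoint(wp, f))      # retreats 1 cm along the push direction
```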

CONFERENCE PAPER

Gao Y, Chang HJ, Demiris Y, 2016, Iterative Path Optimisation for Personalised Dressing Assistance using Vision and Force Information, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 4398-4403

CONFERENCE PAPER

Georgiou T, Demiris Y, 2016, Personalised Track Design in Car Racing Games, IEEE Conference on Computational Intelligence and Games (CIG), Publisher: IEEE, ISSN: 2325-4270

CONFERENCE PAPER

Kristan M, Leonardis A, Matas J, Felsberg M, Pflugfelder R, Cehovin L, Vojir T, Hager G, Lukezic A, Fernandez G, Gupta A, Petrosino A, Memarmoghadam A, Garcia-Martin A, Montero AS, Vedaldi A, Robinson A, Ma AJ, Varfolomieiev A, Alatan A, Erdem A, Ghanem B, Liu B, Han B, Martinez B, Chang C-M, Xu C, Sun C, Kim D, Chen D, Du D, Mishra D, Yeung D-Y, Gundogdu E, Erdem E, Khan F, Porikli F, Zhao F, Bunyak F, Battistone F, Zhu G, Roffo G, Subrahmanyam GRKS, Bastos G, Seetharaman G, Medeiros H, Li H, Qi H, Bischof H, Possegger H, Lu H, Lee H, Nam H, Chang HJ, Drummond I, Valmadre J, Jeong J-C, Cho J-I, Lee J-Y, Zhu J, Feng J, Gao J, Choi JY, Xiao J, Kim J-W, Jeong J, Henriques JF, Lang J, Choi J, Martinez JM, Xing J, Gao J, Palaniappan K, Lebeda K, Gao K, Mikolajczyk K, Qin L, Wang L, Wen L, Bertinetto L, Rapuru MK, Poostchi M, Maresca M, Danelljan M, Mueller M, Zhang M, Arens M, Valstar M, Tang M, Baek M, Khan MH, Wang N, Fan N, Al-Shakarji N, Miksik O, Akin O, Moallem P, Senna P, Torr PHS, Yuen PC, Huang Q, Martin-Nieto R, Pelapur R, Bowden R, Laganiere R, Stolkin R, Walsh R, Krah SB, Li S, Zhang S, Yao S, Hadfield S, Melzi S, Lyu S, Li S, Becker S, Golodetz S, Kakanuru S, Choi S, Hu T, Mauthner T, Zhang T, Pridmore T, Santopietro V, Hu W, Li W, Huebner W, Lan X, Wang X, Li X, Li Y, Demiris Y, Wang Y, Qi Y, Yuan Z, Cai Z, Xu Z, He Z, Chi Z et al., 2016, The Visual Object Tracking VOT2016 Challenge Results, 14th European Conference on Computer Vision (ECCV), Publisher: Springer International Publishing, Pages: 777-823, ISSN: 0302-9743

CONFERENCE PAPER

Petit M, Demiris Y, 2016, Hierarchical Action Learning by Instruction Through Interactive Grounding of Body Parts and Proto-actions, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 3375-3382, ISSN: 1050-4729

CONFERENCE PAPER

Petit M, Fischer T, Demiris Y, 2016, Lifelong Augmentation of Multimodal Streaming Autobiographical Memories, IEEE Transactions on Cognitive and Developmental Systems, Vol: 8, Pages: 201-213, ISSN: 2379-8920

JOURNAL ARTICLE

Petit M, Fischer T, Demiris Y, 2016, Towards the Emergence of Procedural Memories from Lifelong Multi-Modal Streaming Memories for Cognitive Robots, Workshop on Machine Learning Methods for High-Level Cognitive Capabilities in Robotics at IEEE/RSJ IROS

Various research topics are emerging as the demand for intelligent lifelong interactions between robots and humans increases. Among them, we can find the examination of persistent storage, the continuous unsupervised annotation of memories, and the usage of data at high frequency over long periods of time. We recently proposed a lifelong autobiographical memory architecture tackling some of these challenges, allowing the iCub humanoid robot to 1) create new memories for actions that are both self-executed and observed from humans, 2) continuously annotate these actions in an unsupervised manner, and 3) use reasoning modules to augment these memories a posteriori. In this paper, we present a reasoning algorithm which generalises the robot's understanding of actions by finding the points of commonality with former ones. In particular, we generated and labelled templates of pointing actions in different directions. This represents a first step towards the emergence of a procedural memory within a long-term autobiographical memory framework for robots.
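A hedged sketch of generalising a template from labelled examples: keep the parts of aligned action recordings that vary little across repetitions. The alignment assumption, the variance threshold, and the data shapes are illustrative, not the paper's algorithm.

```python
# Extract a shared template from repeated recordings of one action.
import numpy as np

def extract_template(trials, var_threshold=0.01):
    """trials: (n_trials, n_frames, n_dims) aligned recordings of one
    action (e.g. pointing left). Returns the mean trajectory plus a mask
    of the frames/dimensions that are stable across trials."""
    trials = np.asarray(trials)
    mean_traj = trials.mean(axis=0)
    stable = trials.var(axis=0) < var_threshold  # common across repetitions
    return mean_traj, stable

rng = np.random.default_rng(2)
base = np.linspace(0.0, 1.0, 50)[:, None] * np.array([1.0, 0.2, 0.0])
trials = base + 0.01 * rng.standard_normal((5, 50, 3))  # noisy repetitions
template, mask = extract_template(trials)
```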

CONFERENCE PAPER

Ribes A, Cerquides J, Demiris Y, Lopez de Mantaras R et al., 2016, Active Learning of Object and Body Models with Time Constraints on a Humanoid Robot, IEEE Transactions on Cognitive and Developmental Systems, Vol: 8, Pages: 26-41, ISSN: 2379-8920

JOURNAL ARTICLE

Ros R, Oleari E, Pozzi C, Sacchitelli F, Baranzini D, Bagherzadhalimi A, Sanna A, Demiris Y et al., 2016, A Motivational Approach to Support Healthy Habits in Long-term Child-Robot Interaction, International Journal of Social Robotics, Vol: 8, Pages: 599-617, ISSN: 1875-4791

JOURNAL ARTICLE

Zambelli M, Demiris Y, 2016, Multimodal Imitation using Self-learned Sensorimotor Representations, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 3953-3958

CONFERENCE PAPER

Zambelli M, Fischer T, Petit M, Chang HJ, Cully A, Demiris Y et al., 2016, Towards Anchoring Self-Learned Representations to Those of Other Agents, Workshop on Bio-inspired Social Robot Learning in Home Scenarios at the IEEE/RSJ International Conference on Intelligent Robots and Systems, Publisher: Institute of Electrical and Electronics Engineers (IEEE)

In the future, robots will support humans in their everyday activities. One particular challenge that robots will face is understanding and reasoning about the actions of other agents in order to cooperate effectively with humans. We propose to tackle this using a developmental framework, where the robot incrementally acquires knowledge, and in particular 1) self-learns a mapping between motor commands and sensory consequences, 2) rapidly acquires primitives and complex actions from verbal descriptions and instructions given by a human partner, 3) discovers correspondences between the robot's body and other articulated objects and agents, and 4) employs these correspondences to transfer the knowledge acquired from the robot's point of view to the viewpoint of the other agent. We show that our approach requires very little a priori knowledge to achieve imitation learning and to find corresponding body parts of humans, and allows taking the perspective of another agent. This represents a step towards the emergence of a mirror-neuron-like system based on self-learned representations.
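The sketch below mimics steps 1) and 4) in miniature: random motor babbling yields (command, consequence) pairs, a least-squares model self-learns the forward mapping, and inverting it recovers a command that reproduces an observed outcome. The linear plant and two-dimensional spaces are assumptions, far simpler than the paper's multimodal setting.

```python
# Toy self-learned sensorimotor mapping via motor babbling.
import numpy as np

rng = np.random.default_rng(1)

def world(motor_cmd):
    """Hypothetical plant: sensory consequence of a motor command."""
    A = np.array([[0.8, 0.1], [-0.2, 0.5]])
    return motor_cmd @ A.T + 0.01 * rng.standard_normal(2)

# 1) Motor babbling: collect (command, consequence) pairs.
commands = rng.uniform(-1.0, 1.0, size=(200, 2))
sensations = np.array([world(c) for c in commands])

# 2) Self-learn the forward mapping with least squares.
W, *_ = np.linalg.lstsq(commands, sensations, rcond=None)

# 3) Imitation: invert the model to find the command that reproduces
#    an observed sensory outcome of another agent.
observed = np.array([0.3, -0.1])
imitating_cmd = np.linalg.lstsq(W.T, observed, rcond=None)[0]
```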

CONFERENCE PAPER

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
