Imperial College London

Professor Yiannis Demiris

Faculty of Engineering, Department of Electrical and Electronic Engineering

Professor of Human-Centred Robotics, Head of ISN

Contact

+44 (0)20 7594 6300
y.demiris
Website

Location

1014, Electrical Engineering, South Kensington Campus


Publications


180 results found

Johnson M, Demiris Y, 2005, Perceptual Perspective Taking and Action Recognition, International Journal of Advanced Robotic Systems, Vol: 2, Pages: 301-308, ISSN: 1729-8806

Robots that operate in social environments need to be able to recognise and understand the actions of other robots, and humans, in order to facilitate learning through imitation and collaboration. The success of the simulation theory approach to action recognition and imitation relies on the ability to take the perspective of other people, so as to generate simulated actions from their point of view. In this paper, simulation of visual perception is used to recreate the visual egocentric sensory space and egocentric behaviour space of an observed agent, and through this increase the accuracy of action recognition. To demonstrate the approach, experiments are performed with a robot attributing perceptions to and recognising the actions of a second robot.

JOURNAL ARTICLE

Johnson MR, Demiris YK, 2005, Perspective Taking Through Simulation, Towards Autonomous Robotic Systems (TAROS), Pages: 119-126

Robots that operate among humans need to be able to attribute mental states in order to facilitate learning through imitation and collaboration. The success of the simulation theory approach for attributing mental states to another person relies on the ability to take the perspective of that person, typically by generating pretend states from that person’s point of view. In this paper, internal inverse and forward models are coupled to create simulation processes that may be used for mental state attribution: simulation of the visual process is used to attribute perceptions, and simulation of the motor control process is used to attribute potential actions. To demonstrate the approach, experiments are performed with a robot attributing perceptions and potential actions to a second robot.

CONFERENCE PAPER

Khadhouri B, Demiris Y, 2005, Attention shifts during action sequence recognition for social robots, International Conference on Advanced Robotics, Publisher: IEEE, Pages: 468-475

Human action understanding is an important component of our research towards social robots that can operate among humans. A crucial element of this component is visual attention - where should a robot direct its limited visual and computational resources during the perception of a human action? In this paper, we propose a computational model of an attention mechanism that combines the saliency of top-down elements, based on multiple hypotheses about the demonstrated action, with the saliency of bottom up components. We implement our attention mechanism on a robot, and examine its performance during the observation of object-directed human actions. Furthermore, we propose a method for resetting this model that allows it to work on multiple behaviours observed in a sequence. We also implement and investigate this method's performance on the robot.

CONFERENCE PAPER

Khadhouri B, Demiris Y, 2005, Compound effects of top-down and bottom-up influences on visual attention during action recognition, International Joint Conference on Artificial Intelligence (IJCAI), Publisher: International Joint Conferences on Artificial Intelligence, Pages: 1458-1463

The limited visual and computational resources available during the perception of a human action makes a visual attention mechanism essential. In this paper we propose an attention mechanism that combines the saliency of top-down (or goal-directed) elements, based on multiple hypotheses about the demonstrated action, with the saliency of bottom-up (or stimulus-driven) components. Furthermore, we use the bottom-up part to initialise the top-down, hence resulting in a selection of the behaviours that rightly require the limited computational resources. This attention mechanism is then combined with an action understanding model and implemented on a robot, where we examine its performance during the observation of object-directed human actions.

CONFERENCE PAPER


Simmons G, Demiris Y, 2005, Optimal robot arm control using the minimum variance model, Journal of Robotic Systems, Vol: 22, Pages: 677-690, ISSN: 0741-2223

Models of human movement from computational neuroscience provide a starting point for building a system that can produce flexible adaptive movement on a robot. There have been many computational models of human upper limb movement put forward, each attempting to explain one or more of the stereotypical features that characterize such movements. While these models successfully capture some of the features of human movement, they often lack a compelling biological basis for the criteria they choose to optimize. One that does provide such a basis is the minimum variance model (and its extension—task optimization in the presence of signal-dependent noise). Here, the variance of the hand position at the end of a movement is minimized, given that the control signals on the arm's actuators are subject to random noise with zero mean and variance proportional to the amplitude of the signal. Since large control signals, required to move fast, would have higher amplitude noise, the speed-accuracy trade-off emerges as a direct result of the optimization process. We chose to implement a version of this model that would be suitable for the control of a robot arm, using an optimal control scheme based on the discrete-time linear quadratic regulator. This implementation allowed us to examine the applicability of the minimum variance model to producing humanlike movement. In this paper, we describe our implementation of the minimum variance model, both for point-to-point reaching movements and for more complex trajectories involving via points. We also evaluate its performance in producing humanlike movement and show its advantages over other optimization based models (the well-known minimum jerk and minimum torque-change models) for the control of a robot arm.

JOURNAL ARTICLE
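The discrete-time LQR machinery that the paper adapts can be sketched briefly. The following is illustrative only, not the authors' code: a finite-horizon backward Riccati recursion drives a double-integrator "joint" to a target, with a large terminal cost standing in as a proxy for the end-point objective; the signal-dependent noise of the full minimum variance model is omitted.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, N):
    """Backward Riccati recursion; returns time-varying feedback gains K[0..N-1]."""
    P = Qf
    gains = []
    for _ in range(N):
        # K_t = (R + B'PB)^{-1} B'PA ;  P_{t} = Q + A'P(A - BK)
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()
    return gains

# Double-integrator "arm joint": state [position, velocity], control = acceleration.
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.zeros((2, 2))            # no running state cost
R = np.array([[1e-4]])          # small effort penalty
Qf = np.eye(2) * 1e3            # heavy terminal cost on end-point error
gains = finite_horizon_lqr(A, B, Q, R, Qf, N=100)

x = np.array([[-1.0], [0.0]])   # start 1 unit from the target, at rest
for K in gains:
    u = -K @ x                  # time-varying state feedback
    x = A @ x + B @ u
print(x[0, 0], x[1, 0])         # should end close to the target (0, 0)
```

With the terminal cost dominating the effort penalty, the arm reaches the target within the 1 s horizon; the full model additionally makes the noise variance grow with control amplitude, which is what produces the speed-accuracy trade-off described in the abstract.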

Veskos P, Demiris Y, 2005, Robot Swinging Using van der Pol Nonlinear Oscillators, International Symposium on Adaptive Motion of Animals and Machines

In this study, we investigated the use of van der Pol oscillators in a 2-dof embodied robotic platform for a swinging task. The oscillator controlled the hip and knee joints of the robot and was capable of generating waveforms with the correct frequency and phase so as to entrain with the mechanical system.

CONFERENCE PAPER
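As a toy illustration of the oscillator dynamics involved (not the paper's implementation, and with the robot's mechanical coupling omitted), a van der Pol oscillator integrated with simple Euler steps converges to a limit cycle of amplitude roughly 2 regardless of its initial perturbation, which is the self-sustaining, phase-adaptable behaviour that makes it suitable for entrainment:

```python
def van_der_pol_step(x, v, mu, omega, drive, dt):
    """One Euler step of a driven van der Pol oscillator:
       x'' - mu*(1 - x^2)*x' + omega^2 * x = drive"""
    a = mu * (1.0 - x * x) * v - omega ** 2 * x + drive
    return x + dt * v, v + dt * a

dt, mu, omega = 0.001, 1.0, 1.0
x, v = 0.1, 0.0                      # small perturbation from rest
history = []
for step in range(60000):            # 60 s of simulated time, unforced
    x, v = van_der_pol_step(x, v, mu, omega, drive=0.0, dt=dt)
    if step > 40000:                 # record only after transients die out
        history.append(x)
amplitude = max(history)
print(round(amplitude, 2))           # limit-cycle amplitude, roughly 2.0
```

In the swinging task the `drive` term would carry sensory feedback from the mechanical system, letting the oscillator adjust its phase to entrain with the swing rather than run open-loop.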

Veskos P, Demiris Y, 2005, Developmental acquisition of entrainment skills in robot swinging using van der Pol oscillators, International Workshop On Epigenetic Robotics, Pages: 87-93

In this study we investigated the effects of different morphological configurations on a robot swinging task using van der Pol oscillators. The task was examined using two separate degrees of freedom (DoF), both in the presence and absence of neural entrainment. Neural entrainment stabilises the system, reduces time-to-steady state and relaxes the requirement for a strong coupling with the environment in order to achieve mechanical entrainment. It was found that staged release of the distal DoF does not have any benefits over using both DoF from the onset of the experimentation. On the contrary, it is less efficient, both with respect to the time needed to reach a stable oscillatory regime and the maximum amplitude it can achieve. The same neural architecture is successful in achieving neuromechanical entrainment for a robotic walking task.

CONFERENCE PAPER

Johnson M, Demiris Y, 2004, Abstraction in Recognition to Solve the Correspondence Problem for Robot Imitation, Towards Autonomous Robotic Systems, TAROS 2004, Pages: 63-70

A considerable part of the imitation problem is finding mechanisms that link the recognition of actions that are being demonstrated to the execution of the same actions by the imitator. In a situation where a human is instructing a robot, the problem is made more complicated by the difference in morphology. In this paper we present an imitation framework that allows a robot to recognise and imitate object-directed actions performed by a human demonstrator by solving the correspondence problem. The recognition is achieved using an abstraction mechanism that focuses on the features of the demonstration that are important to the imitator. The abstraction mechanism is applied to experimental scenarios in which a robot imitates human-demonstrated tasks of transporting objects between tables.

CONFERENCE PAPER

Simmons G, Demiris Y, 2004, Imitation of human demonstration using a biologically inspired modular optimal control scheme, New York, IEEE/RAS International Conference on Humanoid Robots, Publisher: IEEE, Pages: 215-234

Progress in the field of humanoid robotics and the need to find simpler ways to program such robots has prompted research into computational models for robotic learning from human demonstration. To further investigate biologically inspired human-like robotic movement and imitation, we have constructed a framework based on three key features of human movement and planning: optimality, modularity and learning. In this paper we describe a computational motor system, based on the minimum variance model of human movement, that uses optimality principles to produce human-like movement in a robot arm. Within this motor system different movements are represented in a modular structure. When the system observes a demonstrated movement, the motor system uses these modules to produce motor commands which are used to update an internal state representation. This is used so that the system can recognize known movements and move the robot arm accordingly, or extract key features from the demonstrated movement and use them to learn a new module. The active involvement of the motor system in the recognition and learning of observed movements has its theoretical basis in the direct matching hypothesis and the use of a model for human-like movement allows the system to learn from human demonstration.

CONFERENCE PAPER

Simmons G, Demiris Y, 2004, Biologically inspired optimal robot arm control with signal-dependent noise, IEEE/RSJ International Conference on Intelligent Robots and Systems, Pages: 491-496

Progress in the field of humanoid robotics and the need to find simpler ways to program such robots has prompted research into computational models for robotic learning from human demonstration. To further investigate biologically inspired human-like robotic movement and imitation, we have constructed a framework based on three key features of human movement and planning: optimality, modularity and learning. In this paper we focus on the application of optimality principles to the production of human-like movement by a robot arm. Among computational theories of human movement, the signal-dependent noise, or minimum variance, model was chosen as a biologically realistic control scheme to produce human-like movement. A well known optimal control algorithm, the linear quadratic regulator, was adapted to implement this model. The scheme was applied both in simulation and on a real robot arm, which demonstrated human-like movement profiles in a point-to-point reaching experiment.

CONFERENCE PAPER

Demiris Y, Johnson M, 2003, Distributed, predictive perception of actions: a biologically inspired robotics architecture for imitation and learning, Connection Science, Vol: 15, Pages: 231-243, ISSN: 0954-0091

One of the most important abilities for an agent's cognitive development in a social environment is the ability to recognize and imitate actions of others. In this paper we describe a cognitive architecture for action recognition and imitation, and present experiments demonstrating its implementation in robots. Inspired by neuroscientific and psychological data, and adopting a ‘simulation theory of mind’ approach, the architecture uses the motor systems of the imitator in a dual role, both for generating actions, and for understanding actions when performed by others. It consists of a distributed system of inverse and forward models that uses prediction accuracy as a means to classify demonstrated actions. The architecture is also shown to be capable of learning new composite actions from demonstration.

JOURNAL ARTICLE

Eneje E, Demiris Y, 2003, Towards Robot Intermodal Matching Using Spiking Neurons, IROS'03 Workshop on Programming by Demonstration, Pages: 95-99

For a robot to successfully learn from demonstration it must posses the ability to reproduce the actions of a teacher. For this to happen, the robot must generate motor signals to match its proprioceptively perceived state with that of the visually perceived state of a teacher. In this paper we describe a real time matching model at a neural level of description. Experimental results from matching of arm movements, using dynamically simulated articulated robots, are presented.

CONFERENCE PAPER

Johnson M, Demiris Y, 2003, An integrated rapid development environment for computer-aided robot design and simulation, Bury St Edmunds, International Conference on Mechatronics, ICOM 2003, Publisher: Wiley, Pages: 485-490

We present our work towards the development of a rapid prototyping integrated environment for the design and dynamical simulation of multibody robotic systems. Subsequently, we demonstrate its current functionality in a case study involving the construction of a 130 DoF humanoid robot that attempts to closely match human motion capabilities. The modelling system relies exclusively on open-source software libraries thus offering high levels of customization and extensibility to the end-user.

CONFERENCE PAPER

Prince CG, Demiris Y, 2003, Editorial: Introduction to the special issue on epigenetic robotics, Adaptive Behaviour, Vol: 11, Pages: 75-77, ISSN: 1059-7123

JOURNAL ARTICLE

Demiris Y, 2002, Mirror neurons, imitation and the learning of movement sequences, 9th International Conference on Neural Information Processing (ICONIP), Singapore, 18 - 22 November 2002, Publisher: Nanyang Technological Univ, Pages: 111-115

We draw inspiration from properties of "mirror" neurons discovered in the macaque monkey brain area F5, to design and implement a distributed behaviour-based architecture that equips robots with movement imitation abilities. We combine this generative route with a learning route, and demonstrate how new composite behaviours that exhibit mirror neuron like properties can be learned from demonstration.

CONFERENCE PAPER

Demiris Y, 2002, Biologically inspired robot imitation mechanisms and their application as models of mirror neurons, Proceedings of EPSRC/BBSRC workshop on biologically inspired robotics, Pages: 126-133

CONFERENCE PAPER

Demiris Y, Hayes G, 2002, Imitation as a dual-route process featuring predictive and learning components: a biologically plausible computational model, Imitation in animals and artifacts, Editors: Dautenhahn, Nehaniv, Cambridge, Massachusetts, Publisher: MIT Press, Pages: 327-361, ISBN: 9780262042031

BOOK CHAPTER

Balkenius C, Prince C, Demiris Y, Marom Y, Kozima H et al., 2001, Proceedings of the first international workshop on epigenetic robotics: modeling cognitive development in robotic systems, Lund, Publisher: Lund University, ISBN: 9789163114656

BOOK

2000, Advances in Robot Learning, 8th European Workshop on Learning Robots, EWLR-8, Lausanne, Switzerland, September 18, 1999, Proceedings, Publisher: Springer

CONFERENCE PAPER

1998, Learning Robots, 6th European Workshop, EWLR-6, Brighton, England, UK, August 1-2, 1997, Proceedings, Publisher: Springer

CONFERENCE PAPER

Demiris Y, Hayes G, 1997, Do Robots Ape?, AAAI Fall Symposium on Socially Intelligent Agents, Publisher: AAAI, Pages: 28-30

Within the context of two sets of robotic experiments we have performed, we examine some representational and algorithmic issues that need to be addressed in order to equip robots with the capacity to imitate. We suggest that some of the difficulties might be eased by placing imitation architectures within a wider social context.

CONFERENCE PAPER


Klingspor V, Demiris Y, Kaiser M, 1997, Human-Robot Communication and Machine Learning, Applied Artificial Intelligence, Vol: 11, Pages: 719-746, ISSN: 0883-9514

Human-Robot Interaction and especially Human-Robot Communication (HRC) is of primary importance for the development of robots that operate outside production lines and cooperate with humans. In this paper, we review the state of the art and discuss two complementary aspects of the role machine learning plays in HRC. First, we show how communication itself can benefit from learning, e.g. by building human-understandable symbols from a robot’s perceptions and actions. Second, we investigate the power of non-verbal communication and imitation learning mechanisms for robot programming.

JOURNAL ARTICLE

Demiris J, 1994, Experiments towards robotic learning by imitation, 12th National Conference on Artificial Intelligence, Publisher: MIT Press, Pages: 1439-1439

CONFERENCE PAPER

Cully A, Demiris Y, Hierarchical Behavioral Repertoires with Unsupervised Descriptors

Enabling artificial agents to automatically learn complex, versatile and high-performing behaviors is a long-lasting challenge. This paper presents a step in this direction with hierarchical behavioral repertoires that stack several behavioral repertoires to generate sophisticated behaviors. Each repertoire of this architecture uses the lower repertoires to create complex behaviors as sequences of simpler ones, while only the lowest repertoire directly controls the agent's movements. This paper also introduces a novel approach to automatically define behavioral descriptors thanks to an unsupervised neural network that organizes the produced high-level behaviors. The experiments show that the proposed architecture enables a robot to learn how to draw digits in an unsupervised manner after having learned to draw lines and arcs. Compared to traditional behavioral repertoires, the proposed architecture reduces the dimensionality of the optimization problems by orders of magnitude and provides behaviors with a twice better fitness. More importantly, it enables the transfer of knowledge between robots: a hierarchical repertoire evolved for a robotic arm to draw digits can be transferred to a humanoid robot by simply changing the lowest layer of the hierarchy. This enables the humanoid to draw digits although it has never been trained for this task.

CONFERENCE PAPER

Goncalves Nunes U, Demiris Y, 3D Motion Segmentation of Articulated Rigid Bodies based on RGB-D Data, British Machine Vision Conference

CONFERENCE PAPER

Nguyen P, Fischer T, Chang HJ, Pattacini U, Metta G, Demiris Y et al., Transferring visuomotor learning from simulation to the real world for robotics manipulation tasks, IEEE/RSJ International Conference on Intelligent Robots and Systems, Publisher: IEEE

Hand-eye coordination is a requirement for many manipulation tasks including grasping and reaching. However, accurate hand-eye coordination has shown to be especially difficult to achieve in complex robots like the iCub humanoid. In this work, we solve the hand-eye coordination task using a visuomotor deep neural network predictor that estimates the arm's joint configuration given a stereo image pair of the arm and the underlying head configuration. As there are various unavoidable sources of sensing error on the physical robot, we train the predictor on images obtained from simulation. The images from simulation were modified to look realistic using an image-to-image translation approach. In various experiments, we first show that the visuomotor predictor provides accurate joint estimates of the iCub's hand in simulation. We then show that the predictor can be used to obtain the systematic error of the robot's joint measurements on the physical iCub robot. We demonstrate that a calibrator can be designed to automatically compensate this error. Finally, we validate that this enables accurate reaching of objects while circumventing manual fine-calibration of the robot.

CONFERENCE PAPER

Wang R, Amadori P, Demiris Y, Real-Time Workload Classification during Driving using HyperNetworks, International Conference on Intelligent Robots and Systems, ISSN: 2153-0866

Classifying human cognitive states from behavioral and physiological signals is a challenging problem with important applications in robotics. The problem is challenging due to the data variability among individual users, and sensor artifacts. In this work, we propose an end-to-end framework for real-time cognitive workload classification with mixture Hyper Long Short Term Memory Networks (m-HyperLSTM), a novel variant of HyperNetworks. Evaluating the proposed approach on an eye-gaze pattern dataset collected from simulated driving scenarios of different cognitive demands, we show that the proposed framework outperforms previous baseline methods and achieves 83.9% precision and 87.8% recall during test. We also demonstrate the merit of our proposed architecture by showing improved performance over other LSTM-based methods.

CONFERENCE PAPER

Zolotas M, Elsdon J, Demiris Y, Head-mounted augmented reality for explainable robotic wheelchair assistance, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE

Robotic wheelchairs with built-in assistive features, such as shared control, are an emerging means of providing independent mobility to severely disabled individuals. However, patients often struggle to build a mental model of their wheelchair's behaviour under different environmental conditions. Motivated by the desire to help users bridge this gap in perception, we propose a novel augmented reality system using a Microsoft Hololens as a head-mounted aid for wheelchair navigation. The system displays visual feedback to the wearer as a way of explaining the underlying dynamics of the wheelchair's shared controller and its predicted future states. To investigate the influence of different interface design options, a pilot study was also conducted. We evaluated the acceptance rate and learning curve of an immersive wheelchair training regime, revealing preliminary insights into the potential beneficial and adverse nature of different augmented reality cues for assistive navigation. In particular, we demonstrate that care should be taken in the presentation of information, with effort-reducing cues for augmented information acquisition (for example, a rear-view display) being the most appreciated.

CONFERENCE PAPER

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
