Imperial College London

Professor Yiannis Demiris

Faculty of Engineering, Department of Electrical and Electronic Engineering

Professor of Human-Centred Robotics, Head of ISN
 
 
 

Contact

 

+44 (0)20 7594 6300 | y.demiris | Website

 
 

Location

 

1014, Electrical Engineering, South Kensington Campus



Publications

185 results found

Demiris Y, 2009, Knowing when to assist: developmental issues in lifelong assistive robotics., Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2009, Publisher: IEEE, Pages: 3357-3360, ISSN: 1557-170X

Children and adults with sensorimotor disabilities can significantly increase their autonomy through the use of assistive robots. As the field progresses from short-term, task-specific solutions to long-term, adaptive ones, new challenges are emerging. In this paper a lifelong methodological approach is presented, that attempts to balance the immediate context-specific needs of the user, with the long-term effects that the robot's assistance can potentially have on the user's developmental trajectory.

CONFERENCE PAPER

Butler S, Demiris Y, 2009, Predicting the Movements of Robot Teams Using Generative Models, International Symposium on Distributed Autonomous Robotic Systems (DARS), Publisher: Springer, Pages: 533-542

When a robot plans its actions within an environment containing multiple robots, it is often necessary to take into account the actions and movements of the other robots to either avoid, counter, or cooperate with them, depending on the scenario. Our predictive system is based on the biologically-inspired, simulation theoretic approach that uses internal generative models in single-robot applications. Here, we move beyond the single-robot case to illustrate how these generative models can predict the movements of the opponent’s robots, when applied to an adversarial scenario involving two robot teams. The system is able to recognise whether the robots are attacking or defending, and the formation they are moving in. It can then predict their future movements based on the recognised model. The results confirm that the speed of recognition and the accuracy of prediction depend on how well the models match the robots’ observed behaviour.

CONFERENCE PAPER

Carlson T, Demiris Y, 2009, Using Visual Attention to Evaluate Collaborative Control Architectures for Human Robot Interaction, AISB'09: New Frontiers in Human-Robot Interaction

Collaborative control architectures assist human users in performing tasks, without undermining their capabilities or curtailing the natural development of their skills. In this study, we evaluate our collaborative control architecture by investigating the visual attention patterns of robotic wheelchair users. Our initial hypothesis stated that the user would require less visual attention for driving, whilst they are being assisted by the collaborative system, thus allowing them to concentrate on higher level cognitive tasks, such as planning. However, our analysis of eye gaze patterns, as recorded by a head-mounted eye tracking system, supports the opposite conclusion: that patterns of saccadic activation increase and become more chaotic under the assisted mode. Our findings highlight the necessity for techniques that assist the user in forming an appropriate mental model of the collaborative control architecture.

CONFERENCE PAPER

Demiris Y, Carlson T, 2009, Lifelong robot-assisted mobility: models, tools, and challenges, IET Conference on Assisted Living 2009, Publisher: IET

Increasing the autonomy of users with disabilities through robot-assisted mobility has the potential of facilitating their sensorimotor and social development, as well as reducing the burden of caring for such populations in both inpatient and outpatient settings. While techniques for task-specific assistance exist, they are largely focused on satisfying short-term goals, utilising stationary user models. For lifelong users and particularly for those with rapidly changing sensorimotor skills (for example very young children), adaptive models that take into consideration these developmental trajectories are becoming very important. In this paper, we present our approach to lifelong user models for robot-assisted mobility, and discuss existing models and tools, as well as challenges that remain ahead.

CONFERENCE PAPER

Tidemann A, Ozturk P, Demiris Y, 2009, A Groovy Virtual Drumming Agent, 9th International Conference on Intelligent Virtual Agents, Publisher: SPRINGER-VERLAG BERLIN, Pages: 104-+, ISSN: 0302-9743

CONFERENCE PAPER

Demiris Y, Meltzoff A, 2008, The Robot in the Crib: A Developmental Analysis of Imitation Skills in Infants and Robots., Infant and Child Development, Vol: 17, Pages: 43-53, ISSN: 1522-7227

Interesting systems, whether biological or artificial, develop. Starting from some initial conditions, they respond to environmental changes, and continuously improve their capabilities. Developmental psychologists have dedicated significant effort to studying the developmental progression of infant imitation skills, because imitation underlies the infant's ability to understand and learn from his or her social environment. In a converging intellectual endeavour, roboticists have been equipping robots with the ability to observe and imitate human actions because such abilities can lead to rapid teaching of robots to perform tasks. We provide here a comparative analysis between studies of infants imitating and learning from human demonstrators, and computational experiments aimed at equipping a robot with such abilities. We will compare the research across the following two dimensions: (a) initial conditions-what is innate in infants, and what functionality is initially given to robots, and (b) developmental mechanisms-how does the performance of infants improve over time, and what mechanisms are given to robots to achieve equivalent behaviour. Both developmental science and robotics are critically concerned with: (a) how their systems can and do go 'beyond the stimulus' given during the demonstration, and (b) how the internal models used in this process are acquired during the lifetime of the system.

JOURNAL ARTICLE

Tidemann A, Demiris Y, 2008, Groovy Neural Networks, 18th European Conference on Artificial Intelligence, Publisher: I O S PRESS, Pages: 271-275, ISSN: 0922-6389

CONFERENCE PAPER

Tidemann A, Demiris Y, 2008, A Drum Machine That Learns to Groove, 31st Annual German Conference on Artificial Intelligence, Publisher: SPRINGER-VERLAG BERLIN, Pages: 144-+, ISSN: 0302-9743

CONFERENCE PAPER

Demiris Y, Khadhouri B, 2008, Content-based control of goal-directed attention during human action perception, Interaction Studies, Vol: 9, Pages: 353-376, ISSN: 1572-0373

During the perception of human actions by robotic assistants, the robotic assistant needs to direct its computational and sensor resources to relevant parts of the human action. In previous work we have introduced HAMMER (Hierarchical Attentive Multiple Models for Execution and Recognition) (Demiris and Khadhouri, 2006), a computational architecture that forms multiple hypotheses with respect to what the demonstrated task is, and multiple predictions with respect to the forthcoming states of the human action. To confirm their predictions, the hypotheses request information from an attentional mechanism, which allocates the robot's resources as a function of the saliency of the hypotheses. In this paper we augment the attention mechanism with a component that considers the content of the hypotheses' requests, with respect to the content's reliability, utility and cost. This content-based attention component further optimises the utilisation of the resources while remaining robust to noise. Such computational mechanisms are important for the development of robotic devices that will rapidly respond to human actions, either for imitation or collaboration purposes.

JOURNAL ARTICLE

Carlson T, Demiris Y, 2008, Human-wheelchair collaboration through prediction of intention and adaptive assistance, IEEE International Conference on Robotics and Automation, Publisher: IEEE, Pages: 3926-3931, ISSN: 1050-4729

CONFERENCE PAPER

Takacs B, Demiris Y, 2008, Balancing Spectral Clustering for Segmenting Spatio-Temporal Observations of Multi-Agent Systems, 8th IEEE International Conference on Data Mining, Publisher: IEEE COMPUTER SOC, Pages: 580-587, ISSN: 1550-4786

CONFERENCE PAPER

Takács B, Butler S, Demiris Y, 2007, Multi-agent Behaviour Segmentation via Spectral Clustering, AAAI-2007 Workshop on Plan, Activity and Intention Recognition (PAIR), Publisher: AAAI, Pages: 74-81

We examine the application of spectral clustering for breaking up the behaviour of a multi-agent system in space and time into smaller, independent elements. We extend the clustering into the temporal domain and propose a novel similarity measure, which is shown to possess desirable temporal properties when clustering multi-agent behaviour. We also propose a technique to add knowledge about events of multi-agent interaction with different importance. We apply spectral clustering with this measure for analysing behaviour in a strategic game.

CONFERENCE PAPER
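
To make the clustering idea above concrete, the following is a minimal sketch of spectral clustering over spatio-temporal observations. The combined Gaussian kernel over spatial distance and temporal separation is an illustrative assumption, not the similarity measure proposed in the paper; scikit-learn's SpectralClustering handles the eigendecomposition and final grouping.

```python
# Illustrative sketch: spectral clustering of spatio-temporal observations.
# The combined spatial/temporal Gaussian kernel below is an assumption for
# illustration, not the paper's proposed similarity measure.
import numpy as np
from sklearn.cluster import SpectralClustering

def spatio_temporal_affinity(positions, times, sigma_s=1.0, sigma_t=10.0):
    """Affinity between observations that decays with both spatial distance
    and temporal separation."""
    d_space = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    d_time = np.abs(times[:, None] - times[None, :])
    return np.exp(-(d_space**2) / (2 * sigma_s**2)) * np.exp(-(d_time**2) / (2 * sigma_t**2))

# Synthetic data: three phases of activity observed at successive time steps.
rng = np.random.default_rng(0)
positions = np.vstack([rng.normal(loc=c, scale=0.3, size=(20, 2))
                       for c in ([0, 0], [4, 0], [2, 3])])
times = np.arange(60, dtype=float)

W = spatio_temporal_affinity(positions, times)
labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(W)
print(labels)
```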

Demiris Y, 2007, Prediction of intent in robotics and multi-agent systems., Cognitive Processing, Vol: 8, Pages: 151-158, ISSN: 1612-4782

Moving beyond the stimulus contained in observable agent behaviour, i.e. understanding the underlying intent of the observed agent is of immense interest in a variety of domains that involve collaborative and competitive scenarios, for example assistive robotics, computer games, robot-human interaction, decision support and intelligent tutoring. This review paper examines approaches for performing action recognition and prediction of intent from a multi-disciplinary perspective, in both single robot and multi-agent scenarios, and analyses the underlying challenges, focusing mainly on generative approaches.

JOURNAL ARTICLE

Johnson M, Demiris Y, 2007, Visuo-Cognitive Perspective Taking for Action Recognition, AISB'07: Artificial and Ambient Intelligence, Publisher: AISB, Pages: 262-269

Many excellent architectures exist that allow imitation of actions involving observable goals. In this paper, we develop a Simulation Theory-based architecture that uses continuous visual perspective taking to maintain a persistent model of the demonstrator's knowledge of object locations in dynamic environments; this allows an observer robot to attribute potential actions in the presence of goal occlusions, and predict the unfolding of actions through prediction of visual feedback to the demonstrator. The architecture is tested in robotic experiments, and results show that the approach also allows an observer robot to solve Theory-of-Mind tasks from the 'False Belief' paradigm.

CONFERENCE PAPER

Tidemann A, Demiris Y, 2007, Imitating the Groove: Making Drum Machines more Human, AISB'07: Artificial and Ambient Intelligence, Publisher: AISB, Pages: 232-240

Current music production software allows rapid programming of drum patterns, but programmed patterns often lack the groove that a human drummer will provide, both in terms of being rhythmically too rigid and having no variation for longer periods of time. We have implemented an artificial software drummer that learns drum patterns by extracting user specific variations played by a human drummer. The artificial drummer then builds up a library of patterns it can use in different musical contexts. The artificial drummer models the groove and the variations of the human drummer, enhancing the realism of the produced patterns.

CONFERENCE PAPER

Demiris Y, Billard A, 2007, Special Issue on Robot Learning by Observation, Demonstration, and Imitation, IEEE Transactions on Systems Man and Cybernetics Part B-Cybernetics, Vol: 37, Pages: 254-255, ISSN: 1083-4419

This special issue contains selected extended contributions from both the Adaptation in Artificial and Biological Systems symposium held in Hertfordshire in 2006 and the wider academic community following a public call for papers in 2006. The papers presented serve as a good illustration of the challenges faced by robotics researchers today in the field of programming by observation, demonstration, and imitation.

JOURNAL ARTICLE

Dearden A, Demiris Y, Grau O, 2007, Learning models of camera control for imitation in football matches, AISB'07: Artificial and Ambient Intelligence, Publisher: AISB, Pages: 227-231

In this paper, we present ongoing work towards a system capable of learning from and imitating the movement of a trained cameraman and his director covering a football match. Useful features such as the pitch and the movement of players in the scene are detected using various computer vision techniques. In simulation, a robotic camera trains its own internal model for how it can affect these features. The movement of a real cameraman in an actual football game can be imitated by using this internal model.

CONFERENCE PAPER

Dearden A, Demiris Y, 2007, From exploration to imitation: using learnt internal models to imitate others, AISB'07: Artificial and Ambient Intelligence, Publisher: AISB, Pages: 218-226

We present an architecture that enables asocial and social learning mechanisms to be combined in a unified framework on a robot. The robot learns two kinds of internal models by interacting with the environment with no a priori knowledge of its own motor system: internal object models are learnt about how its motor system and other objects appear in its sensor data; internal control models are learnt by babbling and represent how the robot controls objects. These asocially-learnt models of the robot’s motor system are used to understand the actions of a human demonstrator on objects that they can both interact with. Knowledge acquired through self-exploration is therefore used as a bootstrapping mechanism to understand others and benefit from their knowledge.

CONFERENCE PAPER
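
As a rough illustration of the babbling idea described above, the sketch below issues random motor commands to a toy two-dimensional system, records their effects, and fits a forward model by least squares. The toy dynamics and the linear fit are assumptions for illustration, not the internal models used in the paper.

```python
# Illustrative sketch of learning a forward model by motor babbling:
# issue random commands, record their effect, fit a predictor.
# The linear toy dynamics and least-squares fit are assumptions.
import numpy as np

rng = np.random.default_rng(2)

def environment(state, command):
    # Dynamics unknown to the robot, used only to generate babbling data.
    return state + np.array([[0.8, 0.0], [0.1, 0.5]]) @ command

# Babbling phase: random exploration with no prior knowledge of the motor system.
states, commands, next_states = [], [], []
state = np.zeros(2)
for _ in range(200):
    command = rng.uniform(-1.0, 1.0, size=2)
    nxt = environment(state, command)
    states.append(state); commands.append(command); next_states.append(nxt)
    state = nxt

# Fit a forward model: predict the next state from (state, command).
X = np.hstack([np.array(states), np.array(commands)])
Y = np.array(next_states)
M, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The learnt model can now predict the outcome of an observed command.
test_state, test_command = np.array([1.0, -1.0]), np.array([0.3, 0.7])
print(np.hstack([test_state, test_command]) @ M)
```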

Demiris Y K, 2007, Using Robots to study the mechanisms of imitation, Neuroconstructivism: Perspectives and Prospects, Editors: Mareschal, Sirois, Westermann, Publisher: Oxford University Press, Pages: 159-178

BOOK CHAPTER

Demiris Y K, Johnson M, 2007, Simulation Theory for Understanding Others: A Robotics Perspective, Imitation and Social Learning in Robots, Humans and Animals: Behavioural Social and Communicative Dimensions, Pages: 89-102

BOOK CHAPTER

Dearden A, Demiris Y, Grau O, 2006, Tracking football player movement from a single moving camera using particle filters, European Conference on Visual Media Production (CVMP), Publisher: IET, Pages: 29-37

This paper deals with the problem of tracking football players in a football match using data from a single moving camera. Tracking footballers from a single video source is difficult: not only do the football players occlude each other, but they frequently enter and leave the camera's field of view, making initialisation and destruction of a player's tracking a difficult task. The system presented here uses particle filters to track players. The multiple state estimates used by a particle filter provide an elegant method for maintaining tracking of players following an occlusion. Automated tracking can be achieved by creating and stopping particle filters depending on the input player data.

CONFERENCE PAPER
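
The sketch below shows a minimal bootstrap particle filter of the kind the abstract refers to, tracking a single 2-D position from noisy detections. The random-walk motion model and Gaussian observation likelihood are illustrative assumptions; the paper's system additionally handles creation and destruction of filters and occlusions between players.

```python
# Minimal bootstrap particle filter for tracking one player's 2-D position
# from noisy detections. A generic sketch, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(1)
N = 500                                            # number of particles
particles = rng.normal(0.0, 1.0, size=(N, 2))      # (x, y) position hypotheses
weights = np.full(N, 1.0 / N)

def step(particles, weights, observation, motion_std=0.5, obs_std=1.0):
    # Predict: diffuse particles under a simple random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: re-weight by the likelihood of the observed detection.
    d2 = np.sum((particles - observation) ** 2, axis=1)
    weights = weights * np.exp(-d2 / (2 * obs_std ** 2))
    weights /= weights.sum()
    # Resample in proportion to the weights to concentrate on likely states.
    idx = rng.choice(N, size=N, p=weights)
    return particles[idx], np.full(N, 1.0 / N)

for t in range(20):
    true_pos = np.array([0.1 * t, 0.05 * t])       # synthetic player trajectory
    detection = true_pos + rng.normal(0.0, 1.0, size=2)
    particles, weights = step(particles, weights, detection)
    print(t, particles.mean(axis=0))               # current position estimate
```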

Demiris Y, Khadhouri B, 2006, Content-Based Control of Goal-Directed Attention During Human Action Perception, Pages: 226-231

CONFERENCE PAPER

Demiris Y, Khadhouri B, 2006, Hierarchical attentive multiple models for execution and recognition of actions, Robotics and Autonomous Systems, Vol: 54, Pages: 361-369, ISSN: 0921-8890

According to the motor theories of perception, the motor systems of an observer are actively involved in the perception of actions when these are performed by a demonstrator. In this paper we review our computational architecture, HAMMER (Hierarchical Attentive Multiple Models for Execution and Recognition), where the motor control systems of a robot are organised in a hierarchical, distributed manner, and can be used in the dual role of (a) competitively selecting and executing an action, and (b) perceiving it when performed by a demonstrator. We subsequently demonstrate that such an arrangement can provide a principled method for the top-down control of attention during action perception, resulting in significant performance gains. We assess these performance gains under a variety of resource allocation strategies.

JOURNAL ARTICLE
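
A schematic sketch of the multiple paired inverse-forward model arrangement described in the abstract is given below: each hypothesis predicts the demonstrator's next state, and hypotheses whose predictions match the observation retain higher confidence. The one-dimensional toy models and the exponential confidence update are illustrative assumptions rather than the HAMMER implementation.

```python
# Schematic sketch of competing paired inverse/forward models: each hypothesis
# predicts the demonstrator's next state and is penalised for prediction error.
# The toy 1-D models and confidence update rule are illustrative assumptions.
import numpy as np

class Hypothesis:
    def __init__(self, name, target):
        self.name = name
        self.target = target          # goal state of this behaviour
        self.confidence = 1.0

    def inverse_model(self, state):
        # Motor command that would move the system toward this hypothesis's goal.
        return np.clip(self.target - state, -1.0, 1.0)

    def forward_model(self, state, command):
        # Predicted next state if that command were executed.
        return state + 0.5 * command

    def observe(self, state, next_state):
        # Compare the prediction against the demonstrator's actual next state.
        predicted = self.forward_model(state, self.inverse_model(state))
        self.confidence *= np.exp(-abs(predicted - next_state))

hypotheses = [Hypothesis("reach_left", target=-5.0),
              Hypothesis("reach_right", target=5.0)]

state = 0.0
for _ in range(10):
    next_state = state + 0.4                  # demonstrator moves to the right
    for h in hypotheses:
        h.observe(state, next_state)
    state = next_state

best = max(hypotheses, key=lambda h: h.confidence)
print("recognised action:", best.name)
```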

Simmons G, Demiris Y, 2006, Object Grasping using the Minimum Variance Model, Biological Cybernetics, Vol: 94, Pages: 393-407, ISSN: 0340-1200

Reaching-to-grasp has generally been classified as the coordination of two separate visuomotor processes: transporting the hand to the target object and performing the grip. An alternative view has recently been formed that grasping can be explained as pointing movements performed by the digits of the hand to target positions on the object. We have previously implemented the minimum variance model of human movement as an optimal control scheme suitable for control of a robot arm reaching to a target. Here, we extend that scheme to perform grasping movements with a hand and arm model. Since the minimum variance model requires that signal-dependent noise be present on the motor commands to the actuators of the movement, our approach is to plan the reach and the grasp separately, in line with the classical view, but using the same computational model for pointing, in line with the alternative view. We show that our model successfully captures some of the key characteristics of human grasping movements, including the observations that maximum grip size increases with object size (with a slope of approximately 0.8) and that this maximum grip occurs at 60-80% of the movement time. We then use our model to analyse contributions to the digit end-point variance from the two components of the grasp (the transport and the grip). We also briefly discuss further areas of investigation that are prompted by our model.

JOURNAL ARTICLE
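
The minimum variance model's central assumption, signal-dependent noise on the motor commands, can be illustrated with the short simulation below, which compares the end-point variance produced by an abrupt command profile against a smoother one delivering the same total impulse. The one-dimensional point-mass dynamics and the noise constant are assumptions for illustration, not the model in the paper.

```python
# Illustrative sketch of signal-dependent noise: command noise grows with
# command magnitude, so command sequences can be compared by the end-point
# variance they produce. Point-mass dynamics and noise constant are assumed.
import numpy as np

rng = np.random.default_rng(3)

def endpoint_variance(commands, k_noise=0.2, trials=2000):
    """Simulate noisy executions of a command sequence (1-D point mass,
    unit time steps) and return the variance of the final position."""
    finals = []
    for _ in range(trials):
        pos, vel = 0.0, 0.0
        for u in commands:
            noisy_u = u + rng.normal(0.0, k_noise * abs(u))  # signal-dependent noise
            vel += noisy_u
            pos += vel
        finals.append(pos)
    return np.var(finals)

# An abrupt burst versus a smoother profile with the same total impulse:
# the smoother profile yields a markedly lower end-point variance.
abrupt = [2.0, 0.0, 0.0, 0.0]
smooth = [0.5, 0.5, 0.5, 0.5]
print("abrupt:", endpoint_variance(abrupt))
print("smooth:", endpoint_variance(smooth))
```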

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
