Search results

  • Journal article
    Fong T, Dautenhahn K, Scheutz M, Demiris Y et al., 2008,

    The Third International Conference on Human-Robot Interaction

    , AI Magazine, Vol: 29, Pages: 77-78, ISSN: 0738-4602

    The third international conference on Human-Robot Interaction (HRI-2008) was held in Amsterdam, The Netherlands, March 12-15, 2008. The theme of HRI-2008, "living with robots," highlights the importance of the technical and social issues underlying human-robot interaction with companion and assistive robots for long-term use in everyday life and work activities. More than 250 researchers, practitioners, and exhibitors attended the conference, and many more contributed to the conference as authors or reviewers. HRI-2009 will be held in San Diego, California, from March 11-13, 2009. Copyright © 2008, Association for the Advancement of Artificial Intelligence. All rights reserved.

  • Journal article
    Demiris Y, Meltzoff A, 2008,

    The Robot in the Crib: A Developmental Analysis of Imitation Skills in Infants and Robots.

    , Infant and Child Development, Vol: 17, Pages: 43-53, ISSN: 1522-7227

    Interesting systems, whether biological or artificial, develop. Starting from some initial conditions, they respond to environmental changes, and continuously improve their capabilities. Developmental psychologists have dedicated significant effort to studying the developmental progression of infant imitation skills, because imitation underlies the infant's ability to understand and learn from his or her social environment. In a converging intellectual endeavour, roboticists have been equipping robots with the ability to observe and imitate human actions because such abilities can lead to rapid teaching of robots to perform tasks. We provide here a comparative analysis between studies of infants imitating and learning from human demonstrators, and computational experiments aimed at equipping a robot with such abilities. We will compare the research across the following two dimensions: (a) initial conditions-what is innate in infants, and what functionality is initially given to robots, and (b) developmental mechanisms-how does the performance of infants improve over time, and what mechanisms are given to robots to achieve equivalent behaviour. Both developmental science and robotics are critically concerned with: (a) how their systems can and do go 'beyond the stimulus' given during the demonstration, and (b) how the internal models used in this process are acquired during the lifetime of the system.

  • Journal article
    Demiris Y, Khadhouri B, 2008,

    Content-based control of goal-directed attention during human action perception

    , Interaction Studies, Vol: 9, Pages: 353-376, ISSN: 1572-0373

    During the perception of human actions, a robotic assistant needs to direct its computational and sensor resources to relevant parts of the human action. In previous work we have introduced HAMMER (Hierarchical Attentive Multiple Models for Execution and Recognition) (Demiris and Khadhouri, 2006), a computational architecture that forms multiple hypotheses with respect to what the demonstrated task is, and multiple predictions with respect to the forthcoming states of the human action. To confirm their predictions, the hypotheses request information from an attentional mechanism, which allocates the robot's resources as a function of the saliency of the hypotheses. In this paper we augment the attention mechanism with a component that considers the content of the hypotheses' requests, with respect to the content's reliability, utility and cost. This content-based attention component further optimises the utilisation of the resources while remaining robust to noise. Such computational mechanisms are important for the development of robotic devices that will rapidly respond to human actions, either for imitation or collaboration purposes.
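
    A minimal sketch of the content-based allocation idea summarised above, assuming hypothetical request fields (saliency, reliability, utility, cost) and a simple multiplicative score within a fixed resource budget; this is an illustration, not the architecture's actual attention component:

        # Hypothetical sketch: rank information requests by reliability, utility and cost,
        # then grant sensor/processing resources greedily within a fixed budget.
        # The scoring rule and field names are illustrative assumptions, not the paper's.
        from dataclasses import dataclass

        @dataclass
        class Request:
            hypothesis: str     # which action hypothesis issued the request
            saliency: float     # current confidence of that hypothesis
            reliability: float  # expected reliability of the requested measurement (0..1)
            utility: float      # expected reduction in uncertainty if granted
            cost: float         # sensor/processing cost of serving the request

        def allocate(requests, budget):
            """Grant requests in order of a content-based score until the budget runs out."""
            scored = sorted(requests,
                            key=lambda r: r.saliency * r.reliability * r.utility / max(r.cost, 1e-6),
                            reverse=True)
            granted, spent = [], 0.0
            for r in scored:
                if spent + r.cost <= budget:
                    granted.append(r)
                    spent += r.cost
            return granted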

  • Conference paper
    , 2008,

    Proceedings of the 3rd ACM/IEEE international conference on Human robot interaction, HRI 2008, Amsterdam, The Netherlands, March 12-15, 2008

    , Publisher: ACM
  • Conference paper
    Tidemann A, Demiris Y, 2008,

    Groovy Neural Networks

    , 18th European Conference on Artificial Intelligence, Publisher: I O S PRESS, Pages: 271-275, ISSN: 0922-6389
  • Conference paper
    Tidemann A, Demiris Y, 2008,

    A Drum Machine That Learns to Groove

    , 31st Annual German Conference on Artificial Intelligence, Publisher: SPRINGER-VERLAG BERLIN, Pages: 144+, ISSN: 0302-9743
  • Conference paper
    Moreno JC, Pons JL, Rocon E, Demiris Y et al., 2008,

    A hybrid method based on fuzzy inference and non-linear oscillators for real-time control of gait

    , 1st International Conference on Bio-Inspired Systems and Signal Processing, Publisher: INSTICC-INST SYST TECHNOLOGIES INFORMATION CONTROL & COMMUNICATION, Pages: 44-51
  • Conference paper
    Takacs B, Demiris Y, 2008,

    Balancing Spectral Clustering for Segmenting Spatio-Temporal Observations of Multi-Agent Systems

    , 8th IEEE International Conference on Data Mining, Publisher: IEEE COMPUTER SOC, Pages: 580-587, ISSN: 1550-4786
  • Conference paper
    Carlson T, Demiris Y, 2008,

    Human-wheelchair collaboration through prediction of intention and adaptive assistance

    , IEEE International Conference on Robotics and Automation, Publisher: IEEE, Pages: 3926-3931, ISSN: 1050-4729
  • Conference paper
    Takács B, Butler S, Demiris Y, 2007,

    Multi-agent Behaviour Segmentation via Spectral Clustering

    , AAAI-2007 Workshop on Plan, Activity and Intention Recognition (PAIR), Publisher: AAAI, Pages: 74-81

    We examine the application of spectral clustering for breaking up the behaviour of a multi-agent system in space and time into smaller, independent elements. We extend the clustering into the temporal domain and propose a novel similarity measure, which is shown to possess desirable temporal properties when clustering multi-agent behaviour. We also propose a technique to add knowledge about events of multi-agent interaction with different importance. We apply spectral clustering with this measure for analysing behaviour in a strategic game.
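
    A minimal spectral-clustering sketch for spatio-temporal observations, assuming a Gaussian similarity that simply mixes spatial and temporal distance; the paper's proposed similarity measure and its event-importance weighting are not reproduced here:

        # Minimal spectral-clustering sketch over spatio-temporal observations.
        # The Gaussian similarity mixing spatial and temporal distance is an
        # illustrative assumption, not the paper's proposed measure.
        import numpy as np
        from scipy.cluster.vq import kmeans2

        def spectral_cluster(points, times, k, sigma_s=1.0, sigma_t=1.0):
            """points: (n, d) positions; times: (n,) timestamps; k: number of clusters."""
            d_s = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            d_t = np.abs(times[:, None] - times[None, :])
            W = np.exp(-(d_s ** 2) / (2 * sigma_s ** 2) - (d_t ** 2) / (2 * sigma_t ** 2))
            # Symmetric normalised Laplacian, as in standard normalised-cuts clustering.
            d_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
            L = np.eye(len(W)) - d_inv_sqrt @ W @ d_inv_sqrt
            _, eigvecs = np.linalg.eigh(L)
            U = eigvecs[:, :k]                                        # k smallest eigenvectors
            U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)  # row-normalise
            _, labels = kmeans2(U, k, minit='++', seed=0)
            return labels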

  • Journal article
    Demiris Y, 2007,

    Prediction of intent in robotics and multi-agent systems.

    , Cognitive Processing, Vol: 8, Pages: 151-158, ISSN: 1612-4782

    Moving beyond the stimulus contained in observable agent behaviour, i.e. understanding the underlying intent of the observed agent, is of immense interest in a variety of domains that involve collaborative and competitive scenarios, for example assistive robotics, computer games, robot-human interaction, decision support and intelligent tutoring. This review paper examines approaches for performing action recognition and prediction of intent from a multi-disciplinary perspective, in both single robot and multi-agent scenarios, and analyses the underlying challenges, focusing mainly on generative approaches.

  • Conference paper
    Dearden A, Demiris Y, Grau O, 2007,

    Learning models of camera control for imitation in football matches

    , AISB'07: Artificial and Ambient Intelligence, Publisher: AISB, Pages: 227-231

    In this paper, we present ongoing work towards a system capable of learning from and imitating the movement of a trained cameraman and his director covering a football match. Useful features such as the pitch and the movement of players in the scene are detected using various computer vision techniques. In simulation, a robotic camera trains its own internal model for how it can affect these features. The movement of a real cameraman in an actual football game can be imitated by using this internal model.

  • Conference paper
    Tidemann A, Demiris Y, 2007,

    Imitating the Groove: Making Drum Machines more Human

    , AISB'07: Artificial and Ambient Intelligence, Publisher: AISB, Pages: 232-240

    Current music production software allows rapid programming of drum patterns, but programmed patterns often lack the groove that a human drummer will provide, both in terms of being rhythmically too rigid and having no variation for longer periods of time. We have implemented an artificial software drummer that learns drum patterns by extracting user specific variations played by a human drummer. The artificial drummer then builds up a library of patterns it can use in different musical contexts. The artificial drummer models the groove and the variations of the human drummer, enhancing the realism of the produced patterns.

  • Conference paper
    Dearden A, Demiris Y, 2007,

    From exploration to imitation: using learnt internal models to imitate others

    , AISB'07: Artificial and Ambient Intelligence, Publisher: AISB, Pages: 218-226

    We present an architecture that enables asocial and social learning mechanisms to be combined in a unified framework on a robot. The robot learns two kinds of internal models by interacting with the environment with no a priori knowledge of its own motor system: internal object models are learnt about how its motor system and other objects appear in its sensor data; internal control models are learnt by babbling and represent how the robot controls objects. These asocially-learnt models of the robot’s motor system are used to understand the actions of a human demonstrator on objects that they can both interact with. Knowledge acquired through self-exploration is therefore used as a bootstrapping mechanism to understand others and benefit from their knowledge.

  • Conference paper
    Johnson M, Demiris Y, 2007,

    Visuo-Cognitive Perspective Taking for Action Recognition

    , AISB'07: Artificial and Ambient Intelligence, Publisher: AISB, Pages: 262-269

    Many excellent architectures exist that allow imitation of actions involving observable goals. In this paper, we develop a Simulation Theory-based architecture that uses continuous visual perspective taking to maintain a persistent model of the demonstrator's knowledge of object locations in dynamic environments; this allows an observer robot to attribute potential actions in the presence of goal occlusions, and predict the unfolding of actions through prediction of visual feedback to the demonstrator. The architecture is tested in robotic experiments, and results show that the approach also allows an observer robot to solve Theory-of-Mind tasks from the 'False Belief' paradigm.

  • Journal article
    Demiris Y, Billard A, 2007,

    Special Issue on Robot Learning by Observation, Demonstration, and Imitation

    , IEEE Transactions on Systems Man and Cybernetics Part B-Cybernetics, Vol: 37, Pages: 254-255, ISSN: 1083-4419

    This special issue contains selected extended contributions from both the Adaptation in Artificial and Biological Systems symposium held in Hertfordshire in 2006 and the wider academic community following a public call for papers in 2006. The papers presented serve as a good illustration of the challenges faced by robotics researchers today in the field of programming by observation, demonstration, and imitation.

  • Book chapter
    Demiris Y K, Johnson M, 2007,

    Simulation Theory for Understanding Others: A Robotics Perspective

    , Imitation and Social Learning in Robots, Humans and Animals: Behavioural Social and Communicative Dimensions, Pages: 89-102
  • Book chapter
    Demiris Y K, 2007,

    Using Robots to study the mechanisms of imitation

    , Neuroconstructivism: Perspectives and Prospects, Editors: Mareschal, Sirois, Westermann, Publisher: Oxford University Press, Pages: 159-178
  • Conference paper
    Chinellato E, Demiris Y, Del Pobil AP, 2006,

    Studying the human visual cortex for improving prehension capabilities in robotics

    , Pages: 184-189

    Although other primates have grasping skills, human beings evolved theirs to the extent that a large fraction of our brain is involved in grasping actions. Recent neuroscience findings allow us to outline a model of vision-based grasp planning that differs from previous ones in that it is the first to rest mainly, if not exclusively, on human physiology. The main theory on which our proposal is based is that of the two streams of the human visual cortex [1]. Although they evolved for different purposes, with the ventral stream dedicated to perceptual vision and the dorsal stream to action-oriented vision, they need to collaborate to allow proper interaction of human beings with the world. Our framework has been conceived to be applied to a robotic setup, and the design of the different brain areas takes into account not only biological plausibility, but also practical issues related to engineering constraints.

  • Conference paper
    Dearden A, Demiris Y, Grau O, 2006,

    Tracking football player movement from a single moving camera using particle filters

    , European Conference on Visual Media Production (CVMP), Publisher: IET, Pages: 29-37

    This paper deals with the problem of tracking football players in a football match using data from a single moving camera. Tracking footballers from a single video source is difficult: not only do the football players occlude each other, but they frequently enter and leave the camera's field of view, making initialisation and destruction of a player's tracking a difficult task. The system presented here uses particle filters to track players. The multiple state estimates used by a particle filter provide an elegant method for maintaining tracking of players following an occlusion. Automated tracking can be achieved by creating and stopping particle filters depending on the input player data.
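
    A minimal bootstrap particle-filter sketch for a single tracked player, assuming a constant-velocity motion model and a Gaussian likelihood on the detected image position; the creation and stopping of per-player filters described above is omitted:

        # Minimal bootstrap particle filter for one tracked player (illustrative sketch;
        # the motion/observation models and parameters are assumptions).
        import numpy as np

        rng = np.random.default_rng(0)

        def init_particles(n, pos, vel_sigma=1.0):
            """State per particle: [x, y, vx, vy], seeded around the first detection."""
            p = np.zeros((n, 4))
            p[:, :2] = pos + rng.normal(0.0, 2.0, size=(n, 2))
            p[:, 2:] = rng.normal(0.0, vel_sigma, size=(n, 2))
            return p

        def step(particles, measurement, dt=1.0, q=0.5, r=3.0):
            # Predict: constant-velocity motion plus Gaussian process noise.
            particles[:, :2] += particles[:, 2:] * dt
            particles += rng.normal(0.0, q, size=particles.shape)
            # Weight: Gaussian likelihood of the detected image position.
            d2 = np.sum((particles[:, :2] - measurement) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * r ** 2)) + 1e-12
            w /= w.sum()
            estimate = np.average(particles[:, :2], axis=0, weights=w)
            # Resample (systematic) so particles concentrate on likely positions
            # after clutter or occlusion.
            u = (rng.random() + np.arange(len(w))) / len(w)
            idx = np.minimum(np.searchsorted(np.cumsum(w), u), len(w) - 1)
            return particles[idx], estimate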

  • Conference paper
    Demiris Y, Khadhouri B, 2006,

    Content-Based Control of Goal-Directed Attention During Human Action Perception

    , Pages: 226-231
  • Journal article
    Demiris Y, Khadhouri B, 2006,

    Hierarchical attentive multiple models for execution and recognition of actions

    , Robotics and Autonomous Systems, Vol: 54, Pages: 361-369, ISSN: 0921-8890

    According to the motor theories of perception, the motor systems of an observer are actively involved in the perception of actions when these are performed by a demonstrator. In this paper we review our computational architecture, HAMMER (Hierarchical Attentive Multiple Models for Execution and Recognition), where the motor control systems of a robot are organised in a hierarchical, distributed manner, and can be used in the dual role of (a) competitively selecting and executing an action, and (b) perceiving it when performed by a demonstrator. We subsequently demonstrate that such an arrangement can provide a principled method for the top-down control of attention during action perception, resulting in significant performance gains. We assess these performance gains under a variety of resource allocation strategies.
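
    An illustrative sketch of the core loop described above: each inverse/forward model pair proposes a command, predicts the demonstrator's next state, and has its confidence updated according to prediction accuracy. The interfaces and the confidence-update rule are assumptions, not the published implementation:

        # Illustrative sketch: several inverse/forward model pairs run in parallel on
        # the observed state, and each pair's confidence is updated from how well its
        # forward model predicted the next observation.
        def hammer_step(models, confidences, state_t, state_t1, beta=1.0):
            """models: list of (inverse_model, forward_model) callables;
            confidences: dict mapping model index -> float."""
            for i, (inverse, forward) in enumerate(models):
                command = inverse(state_t)              # what would I do in this state?
                predicted = forward(state_t, command)   # what should happen next if so?
                error = sum((p - s) ** 2 for p, s in zip(predicted, state_t1))
                confidences[i] += beta * (1.0 / (1.0 + error) - 0.5)  # reward accurate predictors
            best = max(confidences, key=confidences.get)
            return best, confidences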

  • Conference paper
    Veskos P, Demiris Y, 2006,

    Neuro-mechanical entrainment in a bipedal robotic walking platform

    , AISB'06: Adaptation in Artificial and Biological Systems, Publisher: AISB, Pages: 78-84

    In this study, we investigated the use of van der Pol oscillators in a 4-dof embodied bipedal robotic platform for the purposes of planar walking. The oscillator controlled the hip and knee joints of the robot and was capable of generating waveforms with the correct frequency and phase so as to entrain with the mechanical system. Lowering its oscillation frequency resulted in an increase in the walking pace, indicating exploitation of the global natural dynamics. This is verified by its operation in the absence of entrainment, where faster limb motion results in a slower overall walking pace.

  • Journal article
    Simmons G, Demiris Y, 2006,

    Object Grasping using the Minimum Variance Model

    , Biological Cybernetics, Vol: 94, Pages: 393-407, ISSN: 0340-1200

    Reaching-to-grasp has generally been classified as the coordination of two separate visuomotor processes: transporting the hand to the target object and performing the grip. An alternative view has recently been formed that grasping can be explained as pointing movements performed by the digits of the hand to target positions on the object. We have previously implemented the minimum variance model of human movement as an optimal control scheme suitable for control of a robot arm reaching to a target. Here, we extend that scheme to perform grasping movements with a hand and arm model. Since the minimum variance model requires that signal-dependent noise be present on the motor commands to the actuators of the movement, our approach is to plan the reach and the grasp separately, in line with the classical view, but using the same computational model for pointing, in line with the alternative view. We show that our model successfully captures some of the key characteristics of human grasping movements, including the observations that maximum grip size increases with object size (with a slope of approximately 0.8) and that this maximum grip occurs at 60-80% of the movement time. We then use our model to analyse contributions to the digit end-point variance from the two components of the grasp (the transport and the grip). We also briefly discuss further areas of investigation that are prompted by our model.

  • Journal article
    Demiris Y, Simmons G, 2006,

    Perceiving the unusual: temporal properties of hierarchical motor representations for action perception

    , Neural Networks, Vol: 19, Pages: 272-284, ISSN: 0893-6080

    Recent computational approaches to action imitation have advocated the use of hierarchical representations in the perception and imitation of demonstrated actions. Hierarchical representations present several advantages, with the main one being their ability to process information at multiple levels of detail. However, the nature of the hierarchies in these approaches has remained relatively unsophisticated, and their relation with biological evidence has not been investigated in detail, in particular with respect to the timing of movements. Following recent neuroscience work on the modulation of the premotor mirror neuron activity during the observation of unpredictable grasping movements, we present here an implementation of our HAMMER architecture using the minimum variance model for implementing reaching and grasping movements that have biologically plausible trajectories. Subsequently, we evaluate the performance of our model in matching the temporal dynamics of the modulation of cortical excitability during the passive observation of normal and unpredictable movements of human demonstrators.

  • Conference paper
    Dearden A, Demiris Y, 2006,

    Active learning of probabilistic forward models in visuo-motor development

    , AISB'06: Adaptation in Artificial and Biological Systems, Publisher: AISB, Pages: 176-183

    Forward models enable both robots and humans to predict the sensory consequences of their motor actions. To learn its own forward models, a robot needs to experiment with its own motor system, in the same way that human infants need to babble as a part of their motor development. In this paper we investigate how this babbling with the motor system can be influenced by the forward models’ own knowledge of their predictive ability. By spending more time babbling in regions of motor space that require more accuracy in the forward model, the learning time can be reduced. The key to guiding this exploration is the use of probabilistic forward models, which are capable of learning and predicting not just the sensory consequence of a motor command, but also an estimate of how accurate this prediction is. An experiment was carried out to test this theory on a robotic pan-tilt camera.
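
    A sketch of the uncertainty-guided babbling idea, assuming a hypothetical forward_model whose predict() returns both a predicted sensory outcome and an estimate of how uncertain that prediction is, and a robot object that executes commands:

        # Sketch of uncertainty-guided motor babbling: sample candidate motor commands
        # and execute the one whose forward-model prediction is currently least certain.
        # forward_model and robot are hypothetical interfaces used for illustration only.
        import random

        def choose_babble_command(forward_model, candidate_commands):
            """forward_model.predict(cmd) -> (mean_sensory_outcome, predictive_variance)."""
            scored = [(forward_model.predict(cmd)[1], cmd) for cmd in candidate_commands]
            _, best_cmd = max(scored, key=lambda t: t[0])  # largest variance = most informative
            return best_cmd

        def babble(robot, forward_model, command_space, steps=100):
            for _ in range(steps):
                candidates = random.sample(command_space, k=min(20, len(command_space)))
                cmd = choose_babble_command(forward_model, candidates)
                outcome = robot.execute(cmd)
                forward_model.update(cmd, outcome)  # refine the model where it was uncertain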

  • Conference paper
    Veskos P, Demiris Y, 2006,

    Experimental comparison of the van der Pol and Rayleigh nonlinear oscillators for a robotic swinging task

    , AISB'06: Adaptation in Artificial and Biological Systems, Publisher: AISB, Pages: 197-202

    In this paper, the effects of different lower-level building blocks of a robotic swinging system are explored, from the perspective of motor skill acquisition. The van der Pol and Rayleigh oscillators are used to entrain to the system’s natural dynamics, with two different network topologies being used: a symmetric and a hierarchical one. Rayleigh outperformed van der Pol regarding maximum oscillation amplitudes for every morphological configuration examined. However, van der Pol started large amplitude relaxation oscillations faster, attaining better performance during the first half of the transient period. Hence, even though there are great similarities between the oscillators, differences in their resultant behaviours are more pronounced than originally expected.
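
    For reference, the two oscillators in their standard second-order forms (textbook statements, not the specific parameterisations or couplings used in the paper); the nonlinear damping term depends on position for van der Pol and on velocity for Rayleigh:

        \ddot{x} - \mu\,(1 - x^{2})\,\dot{x} + \omega^{2} x = 0 \qquad \text{(van der Pol)}
        \ddot{x} - \mu\,(1 - \dot{x}^{2})\,\dot{x} + \omega^{2} x = 0 \qquad \text{(Rayleigh)}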

  • Book chapter
    Chinellato E, Demiris Y, del Pobil AP, 2006,

    Studying the human visual cortex for achieving action-perception coordination with robots

    , Artificial Intelligence and Soft Computing, Editors: del Pobil, Publisher: Acta Press, Anaheim, CF, USA, Pages: 184-189
  • Journal article
    Johnson M, Demiris Y, 2005,

    Perceptual Perspective Taking and Action Recognition

    , International Journal of Advanced Robotic Systems, Vol: 2, Pages: 301-308, ISSN: 1729-8806

    Robots that operate in social environments need to be able to recognise and understand the actions of other robots, and humans, in order to facilitate learning through imitation and collaboration. The success of the simulation theory approach to action recognition and imitation relies on the ability to take the perspective of other people, so as to generate simulated actions from their point of view. In this paper, simulation of visual perception is used to recreate the visual egocentric sensory space and egocentric behaviour space of an observed agent, and through this increase the accuracy of action recognition. To demonstrate the approach, experiments are performed with a robot attributing perceptions to and recognising the actions of a second robot.

  • Journal article
    Simmons G, Demiris Y, 2005,

    Optimal robot arm control using the minimum variance model

    , Journal of Robotic Systems, Vol: 22, Pages: 677-690, ISSN: 0741-2223

    Models of human movement from computational neuroscience provide a starting point for building a system that can produce flexible adaptive movement on a robot. There have been many computational models of human upper limb movement put forward, each attempting to explain one or more of the stereotypical features that characterize such movements. While these models successfully capture some of the features of human movement, they often lack a compelling biological basis for the criteria they choose to optimize. One that does provide such a basis is the minimum variance model (and its extension—task optimization in the presence of signal-dependent noise). Here, the variance of the hand position at the end of a movement is minimized, given that the control signals on the arm's actuators are subject to random noise with zero mean and variance proportional to the amplitude of the signal. Since large control signals, required to move fast, would have higher amplitude noise, the speed-accuracy trade-off emerges as a direct result of the optimization process. We chose to implement a version of this model that would be suitable for the control of a robot arm, using an optimal control scheme based on the discrete-time linear quadratic regulator. This implementation allowed us to examine the applicability of the minimum variance model to producing humanlike movement. In this paper, we describe our implementation of the minimum variance model, both for point-to-point reaching movements and for more complex trajectories involving via points. We also evaluate its performance in producing humanlike movement and show its advantages over other optimization based models (the well-known minimum jerk and minimum torque-change models) for the control of a robot arm.
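
    A schematic restatement of the control problem as described in the abstract (the notation is ours): with linear arm dynamics driven by commands corrupted by signal-dependent noise, choose the command sequence that minimises the positional variance of the hand at the end of the movement,

        x_{t+1} = A x_t + B\,(u_t + w_t), \qquad \mathbb{E}[w_t] = 0, \quad \operatorname{Var}(w_t) \propto |u_t|,
        \min_{u_0, \ldots, u_{T-1}} \; \operatorname{Var}\bigl(\text{hand position at } t = T\bigr),

    which is why the speed-accuracy trade-off falls out of the optimisation: faster movements require larger commands, and larger commands carry more noise.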

  • Conference paper
    Johnson MR, Demiris YK, 2005,

    Perspective Taking Through Simulation

    , Towards Autonomous Robotic Systems (TAROS), Pages: 119-126

    Robots that operate among humans need to be able to attribute mental states in order to facilitate learning through imitation and collaboration. The success of the simulation theory approach for attributing mental states to another person relies on the ability to take the perspective of that person, typically by generating pretend states from that person’s point of view. In this paper, internal inverse and forward models are coupled to create simulation processes that may be used for mental state attribution: simulation of the visual process is used to attribute perceptions, and simulation of the motor control process is used to attribute potential actions. To demonstrate the approach, experiments are performed with a robot attributing perceptions and potential actions to a second robot.

  • Conference paper
    Veskos P, Demiris Y, 2005,

    Robot Swinging Using van der Pol Nonlinear Oscillators

    , International Symposium on Adaptive Motion of Animals and Machines

    In this study, we investigated the use of van der Pol oscillators in a 2-dof embodied robotic platform for a swinging task. The oscillator controlled the hip and knee joints of the robot and was capable of generating waveforms with the correct frequency and phase so as to entrain with the mechanical system.

  • Conference paper
    Khadhouri B, Demiris Y, 2005,

    Compound effects of top-down and bottom-up influences on visual attention during action recognition

    , International Joint Conference on Artificial Intelligence (IJCAI), Publisher: International Joint Conferences on Artificial Intelligence, Pages: 1458-1463

    The limited visual and computational resources available during the perception of a human action make a visual attention mechanism essential. In this paper we propose an attention mechanism that combines the saliency of top-down (or goal-directed) elements, based on multiple hypotheses about the demonstrated action, with the saliency of bottom-up (or stimulus-driven) components. Furthermore, we use the bottom-up part to initialise the top-down, hence resulting in a selection of the behaviours that rightly require the limited computational resources. This attention mechanism is then combined with an action understanding model and implemented on a robot, where we examine its performance during the observation of object-directed human actions.
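
    An illustrative sketch of combining the two saliency sources, assuming simple linear mixing of per-hypothesis top-down maps with a bottom-up map, and a thresholded bottom-up rule for choosing which behaviours to initialise; the mixing weights and threshold are assumptions:

        # Illustrative combination of bottom-up (stimulus-driven) and top-down
        # (hypothesis-driven) saliency maps; mixing rule and threshold are assumed.
        import numpy as np

        def combined_saliency(bottom_up, top_down_maps, hypothesis_confidences, alpha=0.5):
            """bottom_up: (H, W) map; top_down_maps: list of (H, W) maps, one per hypothesis."""
            weights = np.asarray(hypothesis_confidences, dtype=float)
            weights = weights / weights.sum() if weights.sum() > 0 else weights
            top_down = sum(w * m for w, m in zip(weights, top_down_maps))
            return alpha * top_down + (1 - alpha) * bottom_up

        def initialise_hypotheses(bottom_up, detectors, threshold=0.6):
            """Activate only behaviours whose trigger region is salient enough (assumed rule)."""
            return [name for name, region in detectors.items()
                    if bottom_up[region].mean() > threshold]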

  • Conference paper
    Dearden A, Demiris YK, 2005,

    Learning forward models for robots

    , International Joint Conference on Artificial Intelligence (IJCAI), Publisher: International Joint Conferences on Artificial Intelligence, Pages: 1440-1445

    Forward models enable a robot to predict the effects of its actions on its own motor system and its environment. This is a vital aspect of intelligent behaviour, as the robot can use predictions to decide the best set of actions to achieve a goal. The ability to learn forward models enables robots to be more adaptable and autonomous; this paper describes a system whereby they can be learnt and represented as a Bayesian network. The robot’s motor system is controlled and explored using 'motor babbling'. Feedback about its motor system comes from computer vision techniques requiring no prior information to perform tracking. The learnt forward model can be used by the robot to imitate human movement.

  • Conference paper
    Demiris Y, Dearden A, 2005,

    From motor babbling to hierarchical learning by imitation: a robot developmental pathway

    , International Workshop on Epigenetic Robotics, Pages: 31-37

    How does an individual use the knowledge acquired through self exploration as a manipulable model through which to understand others and benefit from their knowledge? How can developmental and social learning be combined for their mutual benefit? In this paper we review a hierarchical architecture (HAMMER) which allows a principled way for combining knowledge through exploration and knowledge from others, through the creation and use of multiple inverse and forward models. We describe how Bayesian Belief Networks can be used to learn the association between a robot’s motor commands and sensory consequences (forward models), and how the inverse association can be used for imitation. Inverse models created through self exploration, as well as those from observing others can coexist and compete in a principled unified framework, that utilises the simulation theory of mind approach to mentally rehearse and understand the actions of others.

  • Conference paper
    Veskos P, Demiris Y, 2005,

    Developmental acquisition of entrainment skills in robot swinging using van der Pol oscillators

    , International Workshop On Epigenetic Robotics, Pages: 87-93

    In this study we investigated the effects of different morphological configurations on a robot swinging task using van der Pol oscillators. The task was examined using two separate degrees of freedom (DoF), both in the presence and absence of neural entrainment. Neural entrainment stabilises the system, reduces time-to-steady state and relaxes the requirement for a strong coupling with the environment in order to achieve mechanical entrainment. It was found that staged release of the distal DoF does not have any benefits over using both DoF from the onset of the experimentation. On the contrary, it is less efficient, both with respect to the time needed to reach a stable oscillatory regime and the maximum amplitude it can achieve. The same neural architecture is successful in achieving neuromechanical entrainment for a robotic walking task.

  • Conference paper
    Johnson M, Demiris Y, 2005,

    Hierarchies of Coupled Inverse and Forward Models for Abstraction in Robot Action Planning, Recognition and Imitation

    , International Symposium on Imitation in Animals and Artifacts, Publisher: AISB, Pages: 69-76

    Coupling internal inverse and forward models gives rise to on-line simulation processes that may be used as a common computational substrate for action execution, planning, recognition, imitation and learning. In this paper, multiple coupled internal inverse and forward models are arranged in a hierarchical fashion, with each level of the hierarchy interacting with other levels through top-down and bottom-up processes. Through experiments involving imitation of a human demonstrator performing object manipulation tasks, this architecture is shown to equip a robot with a multi-level motor abstraction capability. This is then used to solve the correspondence problem in action recognition. The architecture is inspired by biological evidence.

  • Conference paper
    Khadhouri B, Demiris Y, 2005,

    Attention shifts during action sequence recognition for social robots

    , New York, 12th International Conference on Advanced Robotics, 17 - 20 July 2005, Seattle, WA, Publisher: IEEE, Pages: 468-475
  • Book
    Demiris Y, Dautenhahn K, Nehaniv C, 2005,

    AISB'05: Social Intelligence and Interaction in animals, robots and agents: proceedings of the 3rd international symposium on imitation in animals and artifacts, University of Hertfordshire, Hatfield, UK, 12 - 15 April 2005

    , Publisher: SSAISB
  • Conference paper
    Simmons G, Demiris Y, 2004,

    Imitation of human demonstration using a biologically inspired modular optimal control scheme

    , New York, IEEE/RAS International Conference on Humanoid Robots, Publisher: IEEE, Pages: 215-234

    Progress in the field of humanoid robotics and the need to find simpler ways to program such robots has prompted research into computational models for robotic learning from human demonstration. To further investigate biologically inspired human-like robotic movement and imitation, we have constructed a framework based on three key features of human movement and planning: optimality, modularity and learning. In this paper we describe a computational motor system, based on the minimum variance model of human movement, that uses optimality principles to produce human-like movement in a robot arm. Within this motor system different movements are represented in a modular structure. When the system observes a demonstrated movement, the motor system uses these modules to produce motor commands which are used to update an internal state representation. This is used so that the system can recognize known movements and move the robot arm accordingly, or extract key features from the demonstrated movement and use them to learn a new module. The active involvement of the motor system in the recognition and learning of observed movements has its theoretical basis in the direct matching hypothesis and the use of a model for human-like movement allows the system to learn from human demonstration.

  • Conference paper
    Simmons G, Demiris Y, 2004,

    Biologically inspired optimal robot arm control with signal-dependent noise

    , IEEE/RSJ International Conference on Intelligent Robots and Systems, Pages: 491-496

    Progress in the field of humanoid robotics and the need to find simpler ways to program such robots has prompted research into computational models for robotic learning from human demonstration. To further investigate biologically inspired human-like robotic movement and imitation, we have constructed a framework based on three key features of human movement and planning: optimality, modularity and learning. In this paper we focus on the application of optimality principles to the production of human-like movement by a robot arm. Among computational theories of human movement, the signal-dependent noise, or minimum variance, model was chosen as a biologically realistic control scheme to produce human-like movement. A well known optimal control algorithm, the linear quadratic regulator, was adapted to implement this model. The scheme was applied both in simulation and on a real robot arm, which demonstrated human-like movement profiles in a point-to-point reaching experiment.

  • Conference paper
    Johnson M, Demiris Y, 2004,

    Abstraction in Recognition to Solve the Correspondence Problem for Robot Imitation

    , Towards Autonomous Robotic Systems, TAROS 2004, Pages: 63-70

    A considerable part of the imitation problem is finding mechanisms that link the recognition of actions that are being demonstrated to the execution of the same actions by the imitator. In a situation where a human is instructing a robot, the problem is made more complicated by the difference in morphology. In this paper we present an imitation framework that allows a robot to recognise and imitate object-directed actions performed by a human demonstrator by solving the correspondence problem. The recognition is achieved using an abstraction mechanism that focuses on the features of the demonstration that are important to the imitator. The abstraction mechanism is applied to experimental scenarios in which a robot imitates human-demonstrated tasks of transporting objects between tables.

  • Journal article
    Demiris Y, Johnson M, 2003,

    Distributed, predictive perception of actions: a biologically inspired robotics architecture for imitation and learning

    , Connection Science, Vol: 15, Pages: 231-243, ISSN: 0954-0091

    One of the most important abilities for an agent's cognitive development in a social environment is the ability to recognize and imitate actions of others. In this paper we describe a cognitive architecture for action recognition and imitation, and present experiments demonstrating its implementation in robots. Inspired by neuroscientific and psychological data, and adopting a ‘simulation theory of mind’ approach, the architecture uses the motor systems of the imitator in a dual role, both for generating actions, and for understanding actions when performed by others. It consists of a distributed system of inverse and forward models that uses prediction accuracy as a means to classify demonstrated actions. The architecture is also shown to be capable of learning new composite actions from demonstration.

  • Conference paper
    Eneje E, Demiris Y, 2003,

    Towards Robot Intermodal Matching Using Spiking Neurons

    , IROS'03 Workshop on Programming by Demonstration, Pages: 95-99

    For a robot to successfully learn from demonstration, it must possess the ability to reproduce the actions of a teacher. For this to happen, the robot must generate motor signals to match its proprioceptively perceived state with the visually perceived state of the teacher. In this paper we describe a real-time matching model at a neural level of description. Experimental results from matching of arm movements, using dynamically simulated articulated robots, are presented.

  • Conference paper
    Johnson M, Demiris Y, 2003,

    An integrated rapid development environment for computer-aided robot design and simulation

    , Bury St Edmunds, International Conference on Mechatronics, ICOM 2003, Publisher: Wiley, Pages: 485-490

    We present our work towards the development of a rapid prototyping integrated environment for the design and dynamical simulation of multibody robotic systems. Subsequently, we demonstrate its current functionality in a case study involving the construction of a 130 DoF humanoid robot that attempts to closely match human motion capabilities. The modelling system relies exclusively on open-source software libraries thus offering high levels of customization and extensibility to the end-user.

  • Journal article
    Prince CG, Demiris Y, 2003,

    Editorial: Introduction to the special issue on epigenetic robotics

    , Adaptive Behaviour, Vol: 11, Pages: 75-77, ISSN: 1059-7123
  • Conference paper
    Demiris Y, 2002,

    Mirror neurons, imitation and the learning of movement sequences

    , Singapore, 9th international conference on neural information processing (ICONIP), Singapore, Singapore, 18 - 22 November 2002, Publisher: Nanyang Technological Univ, Pages: 111-115

    We draw inspiration from properties of "mirror" neurons discovered in the macaque monkey brain area F5, to design and implement a distributed behaviour-based architecture that equips robots with movement imitation abilities. We combine this generative route with a learning route, and demonstrate how new composite behaviours that exhibit mirror neuron like properties can be learned from demonstration.

  • Book chapter
    Demiris Y, Hayes G, 2002,

    Imitation as a dual-route process featuring predictive and learning components: a biologically plausible computational model

    , Imitation in animals and artifacts, Editors: Dautenhahn, Nehaniv, Cambridge, Massachusetts, Publisher: MIT Press, Pages: 327-361, ISBN: 9780262042031

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
