Ros R, Nalin M, Wood R, et al., 2011, Child-robot interaction in the wild: Advice to the aspiring experimenter, Pages: 335-342
We present insights gleaned from a series of child-robot interaction experiments carried out in a hospital paediatric department. Our aim is to share good practice in experimental design and lessons learned from implementing systems for social HRI with child users "in the wild", rather than in tightly controlled and constrained laboratory environments. A careful balance is required between the structure imposed by experimental design and the desire to remove constraints that inhibit interaction depth, and hence engagement. © 2011 ACM.
Sarabia M, Ros R, Demiris Y, 2011, Towards an open-source social middleware for humanoid robots, International Conference on Humanoid Robotics, Publisher: IEEE, Pages: 670-675
Recent examples of robotics middleware, including YARP, ROS and NaoQi, have greatly enhanced the standardisation, interoperability and rapid development of robotics application software. In this paper, we present our research towards an open-source middleware to support the development of social robotic applications. At the core of a robot's ability to interact socially are algorithms to perceive the actions and intentions of a human user. We provide a computational layer to standardise these algorithms, utilising a bio-inspired computational architecture known as HAMMER (Hierarchical Attentive Multiple Models for Execution and Recognition), and demonstrate the deployment of such a layer on two different humanoid platforms, the Nao and iCub robots. We use a dance interaction scenario to demonstrate the utility of the framework.
Soh H, Demiris Y, 2011, Evolving Policies for Multi-Reward Partially Observable Markov Decision Processes (MR-POMDPs), Genetic and Evolutionary Computation Conference (GECCO), Publisher: ACM, Pages: 713-720
Plans and decisions in many real-world scenarios are made under uncertainty and to satisfy multiple, possibly conflicting, objectives. In this work, we contribute the multi-reward partially observable Markov decision process (MR-POMDP) as a general modelling framework. To solve MR-POMDPs, we present two hybrid (memetic) multi-objective evolutionary algorithms that generate non-dominated sets of policies (in the form of stochastic finite-state controllers). Performance comparisons between the methods on multi-objective problems in robotics (with 2, 3 and 5 objectives), web advertising (with 3, 4 and 5 objectives) and infectious disease control (with 3 objectives) revealed that the memetic variants outperformed their original counterparts. We anticipate that the MR-POMDP, along with multi-objective evolutionary solvers, will prove useful in a variety of theoretical and real-world applications.
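The non-dominated policy sets at the heart of the MR-POMDP solvers can be illustrated with a minimal Pareto filter. This is a sketch of the general concept only, not the paper's evolutionary algorithm; the policy names and reward vectors are hypothetical.

```python
def dominates(a, b):
    """True if reward vector a Pareto-dominates b (>= everywhere, > somewhere)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated(policies):
    """Filter (name, reward_vector) pairs down to the Pareto front."""
    return [(name, r) for name, r in policies
            if not any(dominates(r2, r) for _, r2 in policies if r2 != r)]

# Hypothetical policies scored on two conflicting objectives
# (e.g. task reward vs. negated energy cost, both maximised).
candidates = [("p1", (5.0, 1.0)), ("p2", (3.0, 3.0)),
              ("p3", (1.0, 4.0)), ("p4", (2.0, 2.0))]
print(non_dominated(candidates))  # p4 is dominated by p2 and is dropped
```

A memetic solver would evolve the controller population and apply a filter like this each generation to retain only the Pareto-optimal policies.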
Soh H, Demiris Y, 2011, Multi-reward policies for medical applications: anthrax attacks and smart wheelchairs., Publisher: ACM, Pages: 471-478
Soh H, Demiris Y, 2011, Involving Young Children in the Design of a Safe, Smart Paediatric Wheelchair, HRI Pioneers Workshop, Pages: 86-87
Independent mobility is crucial for a growing child and its loss can severely impact cognitive, emotional and social development. Unfortunately, powered wheelchair provision for young children has been difficult due to safety concerns. But powered mobility need not be unsafe. Risks can be reduced through the use of robotic technology (e.g., obstacle avoidance) and we present a prototype safe smart paediatric wheelchair: the Assistive Robot Transport for Youngsters (ARTY). A core aspect of our work is that we aim to bring ARTY to the field and we discuss the challenges faced when trying to involve children in the development/testing of medical technology. We discuss one preliminary experiment designed as a “Hide-and-Seek” game as a short case study.
Wu Y, Demiris Y, 2011, Learning Dynamical Representations of Tools for Tool-Use Recognition, International Conference on Robotics and Biomimetics (ROBIO), Publisher: IEEE, Pages: 2664-2669
We consider the problem of representing and recognising tools, a subset of objects that have special functionality and action patterns. Our proposed framework is based on biological evidence of the hierarchical representation of tools in the region of the human cortex that generates action semantics, and it addresses the shortfalls of traditional learning models of object representation when applied to tools. To showcase its merits, the framework is implemented as a hybrid between the Hierarchical Attentive Multiple Models for Execution and Recognition (HAMMER) architecture and Hidden Markov Models (HMMs), recognising and describing tools as dynamic patterns at a symbolic level. The implemented model is tested and validated on two sets of experiments, each comprising 50 human demonstrations with 5 different tools. In the experiment with precise and accurate input data, the cross-validation statistics suggest very robust identification of the learned tools; in the experiment in an unstructured environment, all errors can be explained systematically.
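The HMM side of such a hybrid scores an observed action sequence against each learned tool model and picks the most likely one. A minimal forward-algorithm sketch in pure Python, with entirely made-up two-state models (the states, symbols, and probabilities here are hypothetical, not the paper's learned parameters):

```python
def forward_likelihood(obs, start, trans, emit):
    """P(obs | model) via the HMM forward algorithm.
    start[i]: initial probability of state i
    trans[i][j]: probability of transitioning from state i to j
    emit[i][o]: probability of emitting symbol o in state i
    """
    alpha = [start[i] * emit[i][obs[0]] for i in range(len(start))]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(len(alpha))) * emit[j][o]
                 for j in range(len(start))]
    return sum(alpha)

# Hypothetical 2-state models for two tools; symbols 0/1 stand for
# discretised motion features of the demonstration.
hammer_model = ([0.8, 0.2], [[0.7, 0.3], [0.4, 0.6]], [[0.9, 0.1], [0.2, 0.8]])
saw_model    = ([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]], [[0.1, 0.9], [0.9, 0.1]])

obs = [0, 0, 1, 0]
scores = {name: forward_likelihood(obs, *m)
          for name, m in [("hammer", hammer_model), ("saw", saw_model)]}
print(max(scores, key=scores.get))  # → hammer
```

Recognition then reduces to evaluating the same observation sequence under every tool's model and reporting the highest-likelihood label.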
Butler S, Demiris Y, 2010, Partial observability during predictions of the opponent's movements in an RTS game, Symposium on Computational Intelligence and Games (CIG), Publisher: IEEE, Pages: 46-53
In RTS-style games it is important to be able to predict the movements of the opponent's forces to have the best chance of performing appropriate counter-moves. Resorting to using perfect global state information is generally considered to be `cheating' by the player, so to perform such predictions scouts (or observers) must be used to gather information. This means being in the right place at the right time to observe the opponent. In this paper we show the effect of imposing partial observability onto an RTS game with regard to making predictions, and we compare two different mechanisms that decide where best to direct the attention of the observers to maximise the benefit of predictions.
Butler S, Demiris Y, 2010, Using a Cognitive Architecture for Opponent Target Prediction, AISB'10: International Symposium on AI & Games, Publisher: AISB, Pages: 55-62
One of the most important aspects of a compelling game AI is that it anticipates the player’s actions and responds to them in a convincing manner. The first step towards doing this is to understand what the player is doing and predict their possible future actions. In this paper we show an approach where the AI system focusses on testing hypotheses made about the player’s actions using an implementation of a cognitive architecture inspired by the simulation theory of mind. The application used in this paper is to predict the target that the player is heading towards, in an RTS-style game. We improve the prediction accuracy and reduce the number of hypotheses needed by using path planning and path clustering.
Carlson T, Demiris Y, 2010, Increasing Robotic Wheelchair Safety With Collaborative Control: Evidence from Secondary Task Experiments, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 5582-5587, ISSN: 1050-4729
Martins MF, Demiris Y, 2010, Learning multirobot joint action plans from simultaneous task execution demonstrations., International Conference on Autonomous Agents and Multiagent Systems, Publisher: ACM, Pages: 931-938
The central problem of designing intelligent robot systems which learn by demonstrations of desired behaviour has been studied extensively within the field of robotics. Numerous architectures have been proposed for action recognition and prediction of the intent of a single teacher. However, little work has addressed how a group of robots can learn from simultaneous demonstrations by multiple teachers. This paper contributes a novel approach for learning multirobot joint action plans from unlabelled data. The robots first learn the demonstrated sequence of individual actions using the HAMMER architecture; the group behaviour is then segmented over time and space by applying a spatio-temporal clustering algorithm. The experimental results, in which humans teleoperated real robots during a search-and-rescue task deployment, demonstrated the efficacy of combining action recognition at the individual level with group behaviour segmentation, spotting the exact moment when robots must form coalitions to achieve the goal and thus yielding reasonable generation of multirobot joint action plans.
Martins MF, Demiris Y, 2010, Impact of Human Communication in a Multi-teacher, Multi-robot Learning by Demonstration System., AAMAS'10 Workshop on Agents Learning Interactively from Human Teachers
A wide range of architectures have been proposed within the areas of learning by demonstration and multi-robot coordination. These areas share a common issue: how humans and robots share information and knowledge among themselves. This paper analyses the impact of communication between human teachers during simultaneous demonstration of task execution in the novel Multi-robot Learning by Demonstration domain, using the MRLbD architecture. The performance is analysed in terms of time to task completion, as well as the quality of the multi-robot joint action plans. Participants with different levels of skills taught real robots solutions for a furniture moving task through teleoperation. The experimental results provided evidence that explicit communication between teachers does not necessarily reduce the time to complete a task, but contributes to the synchronisation of manoeuvres, thus enhancing the quality of the joint action plans generated by the MRLbD architecture.
Pitt J, Demiris Y, Polak J, 2010, Converging Bio-inspired Robotics and Socio-inspired Agents for Intelligent Transportation Systems, 9th International Conference on Artificial Immune Systems (ICARIS 2010), Publisher: SPRINGER-VERLAG BERLIN, Pages: 304+, ISSN: 0302-9743
Takacs B, Demiris Y, 2010, Spectral clustering in multi-agent systems, Knowledge and Information Systems, Vol: 25, Pages: 607-622, ISSN: 0219-1377
We examine the application of spectral clustering for breaking up the behavior of a multi-agent system in space and time into smaller, independent elements. We propose clustering observations of individual entities in order to identify significant changes in the parameter space (like spatial position) and detect temporal alterations of behavior within the same framework. Available knowledge of important interactions (events) between entities is also considered. We describe a novel algorithm utilizing iterative subdivisions where clusters are pre-processed at each step to counter spatial scaling, rotation, replay speed, and varying sampling frequency. A method is presented to balance spatial and temporal segmentation based on the expected group size, and a validity measure is introduced to determine the optimal number of clusters. We demonstrate our results by analyzing the outcomes of computer games and compare our algorithm to K-means and traditional spectral clustering.
Wu Y, Demiris Y, 2010, Towards One Shot Learning by Imitation for Humanoid Robots, International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 2889-2894, ISSN: 1050-4729
Teaching a robot new knowledge is a repetitive and tedious process. To accelerate it, we propose a novel template-based approach for robot arm movement imitation. This algorithm selects a previously observed path demonstrated by a human and generates a path in a novel situation, based on pairwise mapping of invariant feature locations present in both the demonstrated and the new scenes, using a combination of minimum-distortion and minimum-energy strategies. This one-shot learning algorithm is capable not only of mapping simple point-to-point paths but also of adapting to more complex tasks, such as those involving forced waypoints. Compared with traditional methodologies, our work requires neither extensive training for generalisation nor expensive run-time computation for accuracy. The algorithm has been statistically validated using cross-validation of grasping experiments, and tested in a practical implementation on the iCub humanoid robot playing the tic-tac-toe game.
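The pairwise mapping of invariant feature locations can be sketched as an assignment problem: find the pairing of demonstrated-scene features to new-scene features that best preserves inter-feature distances. The brute-force search below is a stand-in illustration of a minimum-distortion criterion under that assumption, not the paper's actual method; the scenes and coordinates are hypothetical.

```python
from itertools import permutations
from math import dist

def min_distortion_mapping(demo_feats, new_feats):
    """Brute-force the pairing of demonstrated to new feature locations that
    best preserves all pairwise inter-feature distances. Exponential in the
    number of features, so only suitable for a handful of them."""
    n = len(demo_feats)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        cost = sum(abs(dist(demo_feats[i], demo_feats[j]) -
                       dist(new_feats[perm[i]], new_feats[perm[j]]))
                   for i in range(n) for j in range(i + 1, n))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best

# Hypothetical scenes: the new scene is the demo scene translated by (5, 0),
# with its feature list shuffled.
demo = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
new  = [(5.0, 2.0), (5.0, 0.0), (6.0, 0.0)]
print(min_distortion_mapping(demo, new))  # → (1, 2, 0), recovering the shuffle
```

Once the correspondence is found, a demonstrated path expressed relative to the demo features can be re-anchored to the matched features in the new scene.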
Wu Y, Demiris Y, 2010, Hierarchical Learning Approach for One-shot Action Imitation in Humanoid Robots, International Conference on Control, Automation, Robotics and Vision (ICARCV), Publisher: IEEE, Pages: 453-458
We consider the issue of segmenting an action in the learning phase into a logical set of smaller primitives in order to construct a generative model for imitation learning using a hierarchical approach. Our proposed framework, addressing the “how-to” question in imitation, is based on a one-shot imitation learning algorithm. It incorporates segmentation of a demonstrated template into a series of subactions and takes a hierarchical approach to generate the task action by using a finite state machine in a generative way. Two sets of experiments have been conducted to evaluate the performance of the framework, both statistically and in practice, through playing a tic-tac-toe game. The experiments demonstrate that the proposed framework can effectively improve the performance of the one-shot learning algorithm and reduce the size of primitive space, without compromising the learning quality.
The theremin is an electronic musical instrument considered among the most difficult to play: it demands high precision and stability of the player's hands, as any change of position within proximity of the instrument's antennae alters the pitch or volume. Departing from previous developments of theremin-playing robots, we propose a Humanoid Thereminist System that goes beyond using only one degree of freedom, opening up the possibility for the robot to acquire more complex skills, such as aerial fingering, and to include musical expression in its playing. The proposed system consists of two phases, a calibration phase and a playing phase, which can be executed independently. During the playing phase, the system takes input from a MIDI file and performs path planning for the next note using a combination of a minimum-energy strategy in joint space and feedback error correction. Three experiments have been conducted to evaluate the developed system quantitatively and qualitatively by playing a selection of music files. The experiments demonstrate that the proposed system can effectively utilise multiple degrees of freedom while maintaining minimal pitch error margins.
Butler S, Demiris Y, 2009, Predicting the Movements of Robot Teams Using Generative Models, International Symposium on Distributed Autonomous Robotic Systems (DARS), Publisher: Springer, Pages: 533-542
When a robot plans its actions within an environment containing multiple robots, it is often necessary to take into account the actions and movements of the other robots to either avoid, counter, or cooperate with them, depending on the scenario. Our predictive system is based on the biologically-inspired, simulation theoretic approach that uses internal generative models in single-robot applications. Here, we move beyond the single-robot case to illustrate how these generative models can predict the movements of the opponent’s robots, when applied to an adversarial scenario involving two robot teams. The system is able to recognise whether the robots are attacking or defending, and the formation they are moving in. It can then predict their future movements based on the recognised model. The results confirm that the speed of recognition and the accuracy of prediction depend on how well the models match the robots’ observed behaviour.
Carlson T, Demiris Y, 2009, Using Visual Attention to Evaluate Collaborative Control Architectures for Human Robot Interaction, AISB'09: New Frontiers in Human-Robot Interaction
Collaborative control architectures assist human users in performing tasks, without undermining their capabilities or curtailing the natural development of their skills. In this study, we evaluate our collaborative control architecture by investigating the visual attention patterns of robotic wheelchair users. Our initial hypothesis stated that the user would require less visual attention for driving whilst being assisted by the collaborative system, allowing them to concentrate on higher-level cognitive tasks, such as planning. However, our analysis of eye gaze patterns, as recorded by a head-mounted eye-tracking system, supports the opposite conclusion: patterns of saccadic activation increase and become more chaotic under the assisted mode. Our findings highlight the necessity for techniques that assist the user in forming an appropriate mental model of the collaborative control architecture.
Demiris Y, 2009, Knowing when to assist: developmental issues in lifelong assistive robotics., Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2009, Publisher: IEEE, Pages: 3357-3360, ISSN: 1557-170X
Children and adults with sensorimotor disabilities can significantly increase their autonomy through the use of assistive robots. As the field progresses from short-term, task-specific solutions to long-term, adaptive ones, new challenges are emerging. In this paper, a lifelong methodological approach is presented that attempts to balance the immediate, context-specific needs of the user with the long-term effects that the robot's assistance can potentially have on the user's developmental trajectory.
Demiris Y, Carlson T, 2009, Lifelong robot-assisted mobility: models, tools, and challenges, IET Conference on Assisted Living 2009, Publisher: IET
Increasing the autonomy of users with disabilities through robot-assisted mobility has the potential of facilitating their sensorimotor and social development, as well as reducing the burden of caring for such populations in both inpatient and outpatient settings. While techniques for task-specific assistance exist, they are largely focused on satisfying short-term goals, utilising stationary user models. For lifelong users and particularly for those with rapidly changing sensorimotor skills (for example very young children), adaptive models that take into consideration these developmental trajectories are becoming very important. In this paper, we present our approach to lifelong user models for robot-assisted mobility, and discuss existing models and tools, as well as challenges that remain ahead.
Takács B, Demiris Y, 2009, Multi-robot plan adaptation by constrained minimal distortion feature mapping., Publisher: IEEE, Pages: 742-749
Tidemann A, Ozturk P, Demiris Y, 2009, A Groovy Virtual Drumming Agent, 9th International Conference on Intelligent Virtual Agents, Publisher: SPRINGER-VERLAG BERLIN, Pages: 104+, ISSN: 0302-9743
Wu Y, Demiris Y, 2009, Efficient Template-based Path Imitation by Invariant Feature Mapping, International Conference on Robotics and Biomimetics (ROBIO), Publisher: IEEE, Pages: 913-918
We propose a novel approach for robot movement imitation that is suitable for robotic arm movements in tasks such as reaching and grasping. The algorithm selects a previously observed path demonstrated by an agent and generates a path in a novel situation, based on pairwise mapping of invariant feature locations present in both the demonstrated and the new scenes, using minimum-distortion and minimum-energy strategies. This one-shot learning algorithm is capable not only of mapping simple point-to-point paths but also of adapting to more complex tasks, such as those involving forced waypoints. Compared with traditional methodologies, our work requires neither extensive training for generalisation nor expensive run-time computation for accuracy. Cross-validation statistics of grasping experiments show great similarity between the paths produced by human subjects and those produced by the proposed algorithm.
Carlson T, Demiris Y, 2008, Human-wheelchair collaboration through prediction of intention and adaptive assistance, IEEE International Conference on Robotics and Automation, Publisher: IEEE, Pages: 3926-3931, ISSN: 1050-4729
Demiris Y, Khadhouri B, 2008, Content-based control of goal-directed attention during human action perception, Interaction Studies, Vol: 9, Pages: 353-376, ISSN: 1572-0373
During the perception of human actions by robotic assistants, the robotic assistant needs to direct its computational and sensor resources to relevant parts of the human action. In previous work we have introduced HAMMER (Hierarchical Attentive Multiple Models for Execution and Recognition) (Demiris and Khadhouri, 2006), a computational architecture that forms multiple hypotheses with respect to what the demonstrated task is, and multiple predictions with respect to the forthcoming states of the human action. To confirm their predictions, the hypotheses request information from an attentional mechanism, which allocates the robot's resources as a function of the saliency of the hypotheses. In this paper we augment the attention mechanism with a component that considers the content of the hypotheses' requests, with respect to the content's reliability, utility and cost. This content-based attention component further optimises the utilisation of the resources while remaining robust to noise. Such computational mechanisms are important for the development of robotic devices that will rapidly respond to human actions, either for imitation or collaboration purposes.
Demiris Y, Meltzoff A, 2008, The Robot in the Crib: A Developmental Analysis of Imitation Skills in Infants and Robots., Infant and Child Development, Vol: 17, Pages: 43-53, ISSN: 1522-7227
Interesting systems, whether biological or artificial, develop. Starting from some initial conditions, they respond to environmental changes and continuously improve their capabilities. Developmental psychologists have dedicated significant effort to studying the developmental progression of infant imitation skills, because imitation underlies the infant's ability to understand and learn from his or her social environment. In a converging intellectual endeavour, roboticists have been equipping robots with the ability to observe and imitate human actions, because such abilities can lead to rapid teaching of robots to perform tasks. We provide here a comparative analysis between studies of infants imitating and learning from human demonstrators, and computational experiments aimed at equipping a robot with such abilities. We compare the research across two dimensions: (a) initial conditions: what is innate in infants, and what functionality is initially given to robots; and (b) developmental mechanisms: how the performance of infants improves over time, and what mechanisms are given to robots to achieve equivalent behaviour. Both developmental science and robotics are critically concerned with (a) how their systems can and do go 'beyond the stimulus' given during the demonstration, and (b) how the internal models used in this process are acquired during the lifetime of the system.
Fong T, Dautenhahn K, Scheutz M, et al., 2008, The Third International Conference on Human-Robot Interaction., AI Magazine, Vol: 29, Pages: 77-78
Sastoque JCM, Rovira JLP, Lima ERD, et al., 2008, A Hybrid Method based on Fuzzy Inference and Non-Linear Oscillators for Real-Time Control of Gait., Publisher: INSTICC - Institute for Systems and Technologies of Information, Control and Communication, Pages: 44-51
Takacs B, Demiris Y, 2008, Balancing Spectral Clustering for Segmenting Spatio-Temporal Observations of Multi-Agent Systems, 8th IEEE International Conference on Data Mining, Publisher: IEEE COMPUTER SOC, Pages: 580-587, ISSN: 1550-4786
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.