Imperial College London

Professor Yiannis Demiris

Faculty of Engineering, Department of Electrical and Electronic Engineering

Professor of Human-Centred Robotics, Head of ISN

Contact

Tel: +44 (0)20 7594 6300 | Email: y.demiris | Website

Location

1014, Electrical Engineering, South Kensington Campus

Publications

169 results found

Lee K, Ognibene D, Chang HJ, Kim T-K, Demiris Y et al., 2015, STARE: Spatio-Temporal Attention Relocation for Multiple Structured Activities Detection, IEEE Transactions on Image Processing, Vol: 24, ISSN: 1057-7149

JOURNAL ARTICLE

Ribes A, Cerquides J, Demiris Y, Lopez de Mantaras R et al., 2015, Where is my keyboard? Model-based active adaptation of action-space in a humanoid robot, 15th IEEE-RAS International Conference on Humanoid Robots (Humanoids), Publisher: IEEE, Pages: 602-609, ISSN: 2164-0572

CONFERENCE PAPER

Sarabia M, Lee K, Demiris Y, 2015, Towards a Synchronised Grammars Framework for Adaptive Musical Human-Robot Collaboration, IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Publisher: IEEE, Pages: 715-721

We present an adaptive musical collaboration framework for interaction between a human and a robot. The aim of our work is to develop a system that receives feedback from the user in real time and learns the music progression style of the user over time. To tackle this problem, we represent a song as a hierarchically structured sequence of music primitives. By exploiting the sequential constraints of these primitives inferred from the structural information combined with user feedback, we show that a robot can play music in accordance with the user's anticipated actions. We use Stochastic Context-Free Grammars augmented with the knowledge of the user's learnt preferences. We provide synthetic experiments as well as a pilot study with a Baxter robot and a tangible music table. The synthetic results show the synchronisation and adaptivity features of our framework and the pilot study suggests these are applicable to creating an effective musical collaboration experience.

CONFERENCE PAPER

Soh H, Demiris Y, 2015, Spatio-Temporal Learning With the Online Finite and Infinite Echo-State Gaussian Processes, IEEE Transactions on Neural Networks and Learning Systems, Vol: 26, Pages: 522-536, ISSN: 2162-237X

Successful biological systems adapt to change. In this paper, we are principally concerned with adaptive systems that operate in environments where data arrives sequentially and is multivariate in nature, for example, sensory streams in robotic systems. We contribute two reservoir-inspired methods: 1) the online echo-state Gaussian process (OESGP) and 2) its infinite variant, the online infinite echo-state Gaussian process (OIESGP). Both algorithms are iterative fixed-budget methods that learn from noisy time series. In particular, the OESGP combines the echo-state network with Bayesian online learning for Gaussian processes. Extending this to infinite reservoirs yields the OIESGP, which uses a novel recursive kernel with automatic relevance determination that enables spatial and temporal feature weighting. When fused with stochastic natural gradient descent, the kernel hyperparameters are iteratively adapted to better model the target system. Furthermore, insights into the underlying system can be gleaned from inspection of the resulting hyperparameters. Experiments on noisy benchmark problems (one-step prediction and system identification) demonstrate that our methods yield high accuracies relative to state-of-the-art methods, and standard kernels with sliding windows, particularly on problems with irrelevant dimensions. In addition, we describe two case studies in robotic learning-by-demonstration involving the Nao humanoid robot and the Assistive Robot Transport for Youngsters (ARTY) smart wheelchair.
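A rough sketch of the reservoir-plus-online-Bayesian-readout idea summarised above (not the authors' OESGP/OIESGP code): a randomly generated echo-state reservoir is driven by a noisy toy series while a Bayesian linear readout is updated recursively as samples arrive. The reservoir size, leak rate and precision values are illustrative assumptions.

```python
# Illustrative sketch only -- not the OESGP/OIESGP implementation.
# A random echo-state reservoir feeds a Bayesian linear readout whose
# posterior is updated recursively as samples arrive (assumed hyperparameters).
import numpy as np

rng = np.random.default_rng(0)
n_res, leak = 100, 0.3                      # reservoir size and leak rate (assumed)
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # enforce echo-state property

alpha, beta = 1e-2, 25.0                    # weight-prior / noise precision (assumed)
S_inv = alpha * np.eye(n_res)               # readout posterior precision
b = np.zeros(n_res)                         # accumulates beta * sum(y_t * x_t)

t = np.arange(600)
stream = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)   # toy sensory stream

x = np.zeros(n_res)
sq_err, count = 0.0, 0
for u, y in zip(stream[:-1], stream[1:]):   # one-step-ahead prediction
    x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ np.array([u]))
    y_hat = np.linalg.solve(S_inv, b) @ x   # predictive mean before seeing y
    sq_err += (y_hat - y) ** 2
    count += 1
    S_inv += beta * np.outer(x, x)          # recursive Bayesian readout update
    b += beta * y * x

print("online one-step MSE:", sq_err / count)
```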

JOURNAL ARTICLE

Zambelli M, Demiris Y, 2015, Online Ensemble Learning of Sensorimotor Contingencies, Workshop on Sensorimotor Contingencies For Robotics at IROS

Forward models play a key role in cognitive agents by providing predictions of the sensory consequences of motor commands, also known as sensorimotor contingencies (SMCs). In continuously evolving environments, the ability to anticipate is fundamental in distinguishing cognitive from reactive agents, and it is particularly relevant for autonomous robots, which must be able to adapt their models in an online manner. Online learning skills, high accuracy of the forward models and multiple-step-ahead predictions are needed to enhance the robots' anticipation capabilities. We propose an online heterogeneous ensemble learning method for building accurate forward models of SMCs relating motor commands to effects in the robot's sensorimotor system, in particular considering proprioception and vision. Our method achieves up to 98% higher accuracy in both short- and long-term predictions, compared to single predictors and other online and offline homogeneous ensembles. This method is validated on two different humanoid robots, namely the iCub and the Baxter.
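As a toy illustration of weighting heterogeneous online predictors by their recent error (the base models, the exponential-weights rule and all names below are assumptions, not the paper's method):

```python
# Toy sketch of an online heterogeneous ensemble of forward models.
# Base predictors and the exponential-weights rule are illustrative assumptions.
import numpy as np

class LastValue:                      # trivial forward model: predict last outcome
    def __init__(self): self.last = 0.0
    def predict(self, u): return self.last
    def update(self, u, y): self.last = y

class RunningMean:                    # trivial forward model: predict running mean
    def __init__(self): self.mean, self.n = 0.0, 0
    def predict(self, u): return self.mean
    def update(self, u, y):
        self.n += 1
        self.mean += (y - self.mean) / self.n

class OnlineEnsemble:
    def __init__(self, models, eta=2.0):
        self.models, self.eta = models, eta
        self.w = np.ones(len(models)) / len(models)

    def predict(self, u):
        preds = np.array([m.predict(u) for m in self.models])
        return float(self.w @ preds), preds

    def update(self, u, preds, y):
        self.w *= np.exp(-self.eta * (preds - y) ** 2)   # downweight poor predictors
        self.w /= self.w.sum()
        for m in self.models:
            m.update(u, y)                               # each base model adapts online

rng = np.random.default_rng(0)
ens = OnlineEnsemble([LastValue(), RunningMean()])
for u in np.sin(0.2 * np.arange(200)):    # toy motor-command stream
    y = u + 0.01 * rng.normal()           # toy sensory effect
    y_hat, preds = ens.predict(u)
    ens.update(u, preds, y)
print("ensemble weights:", ens.w)
```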

CONFERENCE PAPER

Demiris Y, Aziz-Zadeh L, Bonaiuto J, 2014, Information Processing in the Mirror Neuron System in Primates and Machines, Neuroinformatics, Vol: 12, Pages: 63-91, ISSN: 1539-2791

The mirror neuron system in primates matches observations of actions with the motor representations used for their execution, and is a topic of intense research and debate in biological and computational disciplines. In robotics, models of this system have been used for enabling robots to imitate and learn how to perform tasks from human demonstrations. Yet, existing computational and robotic models of these systems are found at multiple levels of description, and although some models offer plausible explanations and testable predictions, the differences in the granularity of the experimental setups, methodologies, computational structures and selected modeled data make principled meta-analyses, common in other fields, difficult. In this paper, we adopt an interdisciplinary approach, using the BODB integrated environment in order to bring together several different but complementary computational models, by functionally decomposing them into brain operating principles (BOPs) which each capture a limited subset of the model's functionality. We then explore links from these BOPs to neuroimaging and neurophysiological data in order to pinpoint complementary and conflicting explanations and compare predictions against selected sets of neurobiological data. The results of this comparison are used to interpret mirror system neuroimaging results in terms of neural network activity, evaluate the biological plausibility of mirror system models, and suggest new experiments that can shed light on the neural basis of mirror systems.

JOURNAL ARTICLE

Ros R, Baroni I, Demiris Y, 2014, Adaptive human-robot interaction in sensorimotor task instruction: From human to robot dance tutors, Robotics and Autonomous Systems, Vol: 62, Pages: 707-720, ISSN: 1872-793X

We explore the potential for humanoid robots to interact with children in a dance activity. In this context, the robot plays the role of an instructor to guide the child through several dance moves to learn a dance phrase. We participated in 30 dance sessions in schools to study human–human interaction between children and a human dance teacher, and to identify the applied methodologies. Based on the strategies observed, both social and task-dependent, we implemented a robotic system capable of autonomously instructing dance sequences to children while displaying basic social cues to engage the child in the task. Experiments were performed in a hospital with the Nao robot interacting with 12 children through multiple encounters, when possible (18 sessions, 236 min). Observational analysis through video recordings and survey evaluations were used to assess the quality of interaction. Moreover, we introduce an involvement measure based on the aggregation of observed behavioral cues to assess the level of interest in the interaction through time. The analysis revealed high levels of involvement, while highlighting the need for further research into social engagement and adaptation with robots over repeated sessions.

JOURNAL ARTICLE

Ros R, Coninx A, Demiris Y, Patsis G, Enescu V, Sahli H et al., 2014, Behavioral Accommodation towards a Dance Robot Tutor, International Conference on Human-Robot Interaction, Publisher: ACM/IEEE, Pages: 278-279

We report first results on children's adaptive behavior towards a dance tutoring robot. We observe that children's behavior rapidly evolves over a few sessions in order to accommodate the robotic tutor's rhythm and instructions.

CONFERENCE PAPER

Soh H, Demiris Y, 2014, Incrementally Learning Objects by Touch: Online Discriminative and Generative Models for Tactile-Based Recognition, IEEE Transactions on Haptics, Vol: 7, Pages: 512-525, ISSN: 1939-1412

Human beings not only possess the remarkable ability to distinguish objects through tactile feedback but are further able to improve upon recognition competence through experience. In this work, we explore tactile-based object recognition with learners capable of incremental learning. Using the sparse online infinite Echo-State Gaussian process (OIESGP), we propose and compare two novel discriminative and generative tactile learners that produce probability distributions over objects during object grasping/palpation. To enable iterative improvement, our online methods incorporate training samples as they become available. We also describe incremental unsupervised learning mechanisms, based on novelty scores and extreme value theory, when teacher labels are not available. We present experimental results for both supervised and unsupervised learning tasks using the iCub humanoid, with tactile sensors on its five-fingered anthropomorphic hand, and 10 different object classes. Our classifiers perform comparably to state-of-the-art methods (C4.5 and SVM classifiers) and findings indicate that tactile signals are highly relevant for making accurate object classifications. We also show that accurate "early" classifications are possible using only 20-30 percent of the grasp sequence. For unsupervised learning, our methods generate high-quality clusterings relative to the widely-used sequential k-means and self-organising map (SOM), and we present analyses into the differences between the approaches.

JOURNAL ARTICLE

Su Y, Dong W, Wu Y, Du Z, Demiris Y et al., 2014, Increasing the Accuracy and the Repeatability of Position Control for Micromanipulations Using Heteroscedastic Gaussian Processes, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 4692-4698, ISSN: 1050-4729

CONFERENCE PAPER

Wu Y, Su Y, Demiris Y, 2014, A morphable template framework for robot learning by demonstration: Integrating one-shot and incremental learning approaches, Robotics and Autonomous Systems, Vol: 62, Pages: 1517-1530

Robot learning by demonstration is key to bringing robots into daily social environments to interact with and learn from human and other agents. However, teaching a robot to acquire new knowledge is a tedious and repetitive process and often restrictive to a specific setup of the environment. We propose a template-based learning framework for robot learning by demonstration to address both generalisation and adaptability. This novel framework is based upon a one-shot learning model integrated with spectral clustering and an online learning model to learn and adapt actions in similar scenarios. A set of statistical experiments is used to benchmark the framework components and shows that this approach requires no extensive training for generalisation and can adapt to environmental changes flexibly. Two real-world applications of an iCub humanoid robot playing the tic-tac-toe game and soldering a circuit board are used to demonstrate the relative merits of the framework.

JOURNAL ARTICLE

Belpaeme T, Baxter PE, Read R, Wood R, Cuayáhuitl H, Kiefer B, Racioppa S, Kruijff-Korbayová I, Athanasopoulos G, Enescu V, Looije R, Neerincx M, Demiris Y, Ros-Espinoza R, Beck A, Cañamero L, Hiolle A, Lewis M, Baroni I, Nalin M, Cosi P, Paci G, Tesser F, Sommavilla G, Humbert R et al., 2013, Multimodal Child-Robot Interaction: Building Social Bonds, Journal of Human-Robot Interaction, Vol: 1, Pages: 33-53

For robots to interact effectively with human users they must be capable of coordinated, timely behavior in response to social context. The Adaptive Strategies for Sustainable Long-Term Social Interaction (ALIZ-E) project focuses on the design of long-term, adaptive social interaction between robots and child users in real-world settings. In this paper, we report on the iterative approach taken to scientific and technical developments toward this goal: advancing individual technical competencies and integrating them to form an autonomous robotic system for evaluation "in the wild." The first evaluation iterations have shown the potential of this methodology in terms of adaptation of the robot to the interactant and the resulting influences on engagement. This sets the foundation for an ongoing research program that seeks to develop technologies for social robot companions.

JOURNAL ARTICLE

Chatzis S, Demiris Y, 2013, The Infinite-Order Conditional Random Field Model for Sequential Data Modeling, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol: 6, Pages: 1523-1534, ISSN: 0162-8828

Sequential data labeling is a fundamental task in machine learning applications, with speech and natural language processing, activity recognition in video sequences, and biomedical data analysis being characteristic examples, to name just a few. The conditional random field (CRF), a log-linear model representing the conditional distribution of the observation labels, is one of the most successful approaches for sequential data labeling and classification, and has lately received significant attention in machine learning as it achieves superb prediction performance in a variety of scenarios. Nevertheless, existing CRF formulations can capture only one- or few-timestep interactions and neglect higher order dependences, which are potentially useful in many real-life sequential data modeling applications. To resolve these issues, in this paper we introduce a novel CRF formulation, based on the postulation of an energy function which entails infinitely long time-dependences between the modeled data. Building blocks of our novel approach are: 1) the sequence memoizer (SM), a recently proposed nonparametric Bayesian approach for modeling label sequences with infinitely long time dependences, and 2) a mean-field-like approximation of the model marginal likelihood, which allows for the derivation of computationally efficient inference algorithms for our model. The efficacy of the so-obtained infinite-order CRF model is experimentally demonstrated.
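For reference, the finite-order linear-chain CRF that the abstract contrasts against is the standard log-linear conditional model below; the paper's infinite-order formulation lets the dependence on the previous label extend over the entire label history rather than a single step.

```latex
% Standard first-order linear-chain CRF (baseline form, not the paper's model);
% the infinite-order CRF replaces y_{t-1} with the full history y_{1:t-1}.
p(\mathbf{y}\mid\mathbf{x}) = \frac{1}{Z(\mathbf{x})}
  \exp\Big(\sum_{t=1}^{T}\sum_{k}\lambda_{k}\, f_{k}(y_{t-1}, y_{t}, \mathbf{x}, t)\Big),
\qquad
Z(\mathbf{x}) = \sum_{\mathbf{y}'}\exp\Big(\sum_{t=1}^{T}\sum_{k}\lambda_{k}\, f_{k}(y'_{t-1}, y'_{t}, \mathbf{x}, t)\Big)
```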

JOURNAL ARTICLE

Chinellato E, Ognibene D, Sartori L, Demiris Y et al., 2013, Time to change: Deciding when to switch action plans during a social interaction, Pages: 47-58, ISSN: 0302-9743

Building on the extensive cognitive science literature on the subject, this paper introduces a model of the brain mechanisms underlying social interactions in humans and other primates. The fundamental components of the model are the "Action Observation" and "Action Planning" Systems, dedicated respectively to interpreting/recognizing the partner's movements and to planning actions suited to achieving certain goals. We have implemented a version of the model including reaching and grasping actions, and tuned it on real experimental data from human psychophysical studies. The system is able to automatically detect the switching point at which the Action Planning System takes control over the Action Observation System, overriding the automatic imitation behaviour with a complementary social response. With this computational implementation we aim to validate the model and to endow an artificial agent with the ability to perform meaningful complementary responses to observed actions in social scenarios.

CONFERENCE PAPER

Korkinof D, Demiris Y, 2013, Online Quantum Mixture Regression for Trajectory Learning by Demonstration, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 3222-3229, ISSN: 2153-0858

In this work, we present the online Quantum Mixture Model (oQMM), which combines the merits of quantum mechanics and stochastic optimization. More specifically it allows for quantum effects on the mixture states, which in turn become a superposition of conventional mixture states. We propose an efficient stochastic online learning algorithm based on the online Expectation Maximization (EM), as well as a generation and decay scheme for model components. Our method is suitable for complex robotic applications, where data is abundant or where we wish to iteratively refine our model and conduct predictions during the course of learning. With a synthetic example, we show that the algorithm can achieve higher numerical stability. We also empirically demonstrate the efficacy of our method in well-known regression benchmark datasets. Under a trajectory Learning by Demonstration setting we employ a multi-shot learning application in joint angle space, where we observe higher quality of learning and reproduction. We compare against popular and well-established methods, widely adopted across the robotics community.
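A minimal stepwise (online) EM sketch for a plain one-dimensional Gaussian mixture, as a stand-in for the online-EM machinery mentioned above; the quantum superposition of mixture states and the component generation/decay scheme of the paper are not shown, and the step-size schedule is an assumption.

```python
# Illustrative stepwise (online) EM for a conventional 1-D Gaussian mixture.
# Not the oQMM: quantum effects and component generation/decay are omitted.
import numpy as np

rng = np.random.default_rng(0)
K = 2
pi = np.ones(K) / K                     # mixing weights
mu = np.array([-1.0, 1.0])              # component means (assumed init)
var = np.ones(K)                        # component variances

# Running sufficient statistics: responsibility mass, first and second moments
s0, s1, s2 = pi.copy(), pi * mu, pi * (var + mu**2)

def gauss(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

data = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])
rng.shuffle(data)

for t, x in enumerate(data, start=1):
    rho = (t + 10) ** -0.6                      # step-size schedule (assumed)
    r = pi * gauss(x, mu, var)
    r /= r.sum()                                # E-step: responsibilities
    s0 = (1 - rho) * s0 + rho * r               # stochastic statistics update
    s1 = (1 - rho) * s1 + rho * r * x
    s2 = (1 - rho) * s2 + rho * r * x**2
    pi = s0 / s0.sum()                          # M-step from running statistics
    mu = s1 / s0
    var = np.maximum(s2 / s0 - mu**2, 1e-3)

print("means:", mu, "weights:", pi)
```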

CONFERENCE PAPER

Lee K, Su Y, Kim T-K, Demiris Y et al., 2013, A syntactic approach to robot imitation learning using probabilistic activity grammars, Robotics and Autonomous Systems, Vol: 61, Pages: 1323-1334, ISSN: 0921-8890

This paper describes a syntactic approach to imitation learning that captures important task structures in the form of probabilistic activity grammars from a reasonably small number of samples under noisy conditions. We show that these learned grammars can be recursively applied to help recognize unforeseen, more complicated tasks that share underlying structures. The grammars enforce an observation to be consistent with the previously observed behaviors which can correct unexpected, out-of-context actions due to errors of the observer and/or demonstrator. To achieve this goal, our method (1) actively searches for frequently occurring action symbols that are subsets of input samples to uncover the hierarchical structure of the demonstration, and (2) considers the uncertainties of input symbols due to imperfect low-level detectors. We evaluate the proposed method using both synthetic data and two sets of real-world humanoid robot experiments. In our Towers of Hanoi experiment, the robot learns the important constraints of the puzzle after observing demonstrators solving it. In our Dance Imitation experiment, the robot learns 3 types of dances from human demonstrations. The results suggest that under a reasonable amount of noise, our method is capable of capturing the reusable task structures and generalizing them to cope with recursions.

JOURNAL ARTICLE

Ognibene D, Chinellato E, Sarabia M, Demiris Y et al., 2013, Contextual action recognition and target localization with an active allocation of attention on a humanoid robot, Bioinspiration & Biomimetics, Vol: 8

Exploratory gaze movements are fundamental for gathering the most relevant information regarding the partner during social interactions. Inspired by the cognitive mechanisms underlying human social behaviour, we have designed and implemented a system for dynamic attention allocation that is able to actively control gaze movements during a visual action recognition task by exploiting its own action execution predictions. Our humanoid robot is able, during the observation of a partner's reaching movement, to contextually estimate the goal position of the partner's hand and the location in space of the candidate targets. This is done while actively gazing around the environment, with the purpose of optimizing the gathering of information relevant for the task. Experimental results in a simulated environment show that active gaze control, based on the internal simulation of actions, provides a relevant advantage with respect to other action perception approaches, both in terms of estimation precision and of time required to recognize an action. Moreover, our model reproduces and extends some experimental results on human attention during action perception.

JOURNAL ARTICLE

Ognibene D, Demiris Y, 2013, Towards Active Event Recognition, International Joint Conference on Artificial Intelligence (IJCAI), Publisher: AAAI Press, Pages: 2495-2501

Directing robot attention to recognise activities and to anticipate events like goal-directed actions is a crucial skill for human-robot interaction. Unfortunately, issues like intrinsic time constraints, the spatially distributed nature of the entailed information sources, and the existence of a multitude of unobservable states affecting the system, like latent intentions, have long rendered achievement of such skills a rather elusive goal. The problem tests the limits of current attention control systems. It requires an integrated solution for tracking, exploration and recognition, which traditionally have been seen as separate problems in active vision. We propose a probabilistic generative framework based on a mixture of Kalman filters and information gain maximisation that uses predictions in both recognition and attention-control. This framework can efficiently use the observations of one element in a dynamic environment to provide information on other elements, and consequently enables guided exploration. Interestingly, the sensor-control policy, directly derived from first principles, represents the intuitive trade-off between finding the most discriminative clues and maintaining overall awareness. Experiments on a simulated humanoid robot observing a human executing goal-oriented actions demonstrated improvement on recognition time and precision over baseline systems.

CONFERENCE PAPER

Ognibene D, Wu Y, Lee K, Demiris Y et al., 2013, Hierarchies for embodied action perception, Computational and Robotic Models of the Hierarchical Organization of Behavior, Editors: Baldassarre, Mirolli, Publisher: Springer, Pages: 81-98

During social interactions, humans are capable of initiating and responding to rich and complex social actions despite having incomplete world knowledge, and physical, perceptual and computational constraints. This capability relies on action perception mechanisms that exploit regularities in observed goal-oriented behaviours to generate robust predictions and reduce the workload of sensing systems. To achieve this essential capability, we argue that the following three factors are fundamental. First, human knowledge is frequently hierarchically structured, both in the perceptual and execution domains. Second, human perception is an active process driven by current task requirements and context; this is particularly important when the perceptual input is complex (e.g. human motion) and the agent has to operate under embodiment constraints. Third, learning is at the heart of action perception mechanisms, underlying the agent’s ability to add new behaviours to its repertoire. Based on these factors, we review multiple instantiations of a hierarchically-organised biologically-inspired framework for embodied action perception, demonstrating its flexibility in addressing the rich computational contexts of action perception and learning in robotic platforms.

BOOK CHAPTER

Petit M, Lallée S, Boucher J-D, Pointeau G, Cheminade P, Ognibene D, Chinellato E, Pattacini U, Gori I, Martinez-Hernandez U, Barron-Gonzalez H, Inderbitzin M, Luvizotto A, Vouloutsi V, Demiris Y, Metta G, Dominey PF et al., 2013, The Coordinating Role of Language in Real-Time Multi-Modal Learning of Cooperative Tasks, IEEE Transactions on Autonomous Mental Development, Vol: 5, Pages: 3-17, ISSN: 1943-0604

One of the defining characteristics of human cognition is our outstanding capacity to cooperate. A central requirement for cooperation is the ability to establish a "shared plan", which defines the interlaced actions of the two cooperating agents, in real time, and even to negotiate this shared plan during its execution. In the current research we identify the requirements for cooperation, extending our earlier work in this area. These requirements include the ability to negotiate a shared plan using spoken language, to learn new component actions within that plan, based on visual observation and kinesthetic demonstration, and finally to coordinate all of these functions in real time. We present a cognitive system that implements these requirements, and demonstrate the system's ability to allow a Nao humanoid robot to learn a nontrivial cooperative task in real time. We further provide a concrete demonstration of how the real-time learning capability can be easily deployed on a different platform, in this case the iCub humanoid. The results are considered in the context of how the development of language in the human infant provides a powerful lever in the development of cooperative plans from lower-level sensorimotor capabilities.

JOURNAL ARTICLE

Ros R, Demiris Y, 2013, Creative Dance: An Approach for Social Interaction between Robots and Children, 4th International Workshop on Human Behavior Understanding (HBU), Publisher: Springer, Pages: 40-51, ISSN: 0302-9743

In this paper we discuss the potential of using a dance robot tutor with children in the context of creative dance to study child-robot interaction through several encounters. We have taken part in dance sessions in order to extract strategies and models to inspire and justify the design of a robot dance tutor. Moreover, we present implementation details and preliminary results of a pilot study to extract initial feedback to further improve and test our system with a broader population of children.

CONFERENCE PAPER

Sarabia M, Demiris Y, 2013, A Humanoid Robot Companion for Wheelchair Users, International Conference on Social Robotics (ICSR), Publisher: Springer, Pages: 432-441

In this paper we integrate a humanoid robot with a powered wheelchair with the aim of lowering the cognitive requirements needed for powered mobility. We propose two roles for this companion: pointing out obstacles and giving directions. We show that children enjoyed driving with the humanoid companion by their side during a field-trial in an uncontrolled environment. Moreover, we present the results of a driving experiment for adults where the companion acted as a driving aid and conclude that participants preferred the humanoid companion to a simulated companion. Our results suggest that people will welcome a humanoid companion for their wheelchairs.

CONFERENCE PAPER

Sarabia M, Le Mau T, Soh H, Naruse S, Poon C, Liao Z, Tan KC, Lai ZJ, Demiris Y et al., 2013, iCharibot: Design and Field Trials of a Fundraising Robot, International Conference on Social Robotics (ICSR 2013), Publisher: Springer, Pages: 412-421

In this work, we address the problem of increasing charitable donations through a novel, engaging fundraising robot: the Imperial Charity Robot (iCharibot). To better understand how to engage passers-by, we conducted a field trial in outdoor locations at a busy area in London, spread across 9 sessions of 40 minutes each. During our experiments, iCharibot attracted 679 people and engaged with 386 individuals. Our results show that interactivity led to longer user engagement with the robot. Our data further suggests both saliency and interactivity led to an increase in the total donation amount. These findings should prove useful for future design of robotic fundraisers in particular and for social robots in general.

CONFERENCE PAPER

Soh H, Demiris Y, 2013, When and how to help: An iterative probabilistic model for learning assistance by demonstration, International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 3230-3236, ISSN: 2153-0858

Crafting a proper assistance policy is a difficult endeavour but essential for the development of robotic assistants. Indeed, assistance is a complex issue that depends not only on the task-at-hand, but also on the state of the user, environment and competing objectives. As a way forward, this paper proposes learning the task of assistance through observation; an approach we term Learning Assistance by Demonstration (LAD). Our methodology is a subclass of Learning-by-Demonstration (LbD), yet directly addresses difficult issues associated with proper assistance such as when and how to appropriately assist. To learn assistive policies, we develop a probabilistic model that explicitly captures these elements and provide efficient, online, training methods. Experimental results on smart mobility assistance — using both simulation and a real-world smart wheelchair platform — demonstrate the effectiveness of our approach; the LAD model quickly learns when to assist (achieving an AUC score of 0.95 after only one demonstration) and improves with additional examples. Results show that this translates into better task-performance; our LAD-enabled smart wheelchair improved participant driving performance (measured in lap seconds) by 20.6s (a speedup of 137%), after a single teacher demonstration.

CONFERENCE PAPER

Su Y, Wu Y, Soh H, Du Z, Demiris Y et al., 2013, Enhanced Kinematic Model for Dexterous Manipulation with an Underactuated Hand, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 2493-2499, ISSN: 2153-0858

CONFERENCE PAPER

Carlson T, Demiris Y, 2012, Collaborative Control of a Robotic Wheelchair: Evaluation of Performance, Attention and Workload, IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, Vol: 42, Pages: 876-888, ISSN: 1083-4419

Powered wheelchair users often struggle to drive safely and effectively and, in more critical cases, can only get around when accompanied by an assistant. To address these issues, we propose a collaborative control mechanism that assists users as and when they require help. The system uses a multiple-hypothesis method to predict the driver's intentions and, if necessary, adjusts the control signals to achieve the desired goal safely. The main emphasis of this paper is on a comprehensive evaluation, where we not only look at the system performance but also, perhaps more importantly, characterize the user performance in an experiment that combines eye tracking with a secondary task. Without assistance, participants experienced multiple collisions while driving around the predefined route. Conversely, when they were assisted by the collaborative controller, not only did they drive more safely but also they were able to pay less attention to their driving, resulting in a reduced cognitive workload. We discuss the importance of these results and their implications for other applications of shared control, such as brain-machine interfaces, where it could be used to compensate for both the low frequency and the low resolution of the user input.

JOURNAL ARTICLE

Chatzis SP, Demiris Y, 2012, The copula echo state network, Pattern Recognition, Vol: 45, Pages: 570-577, ISSN: 0031-3203

Echo state networks (ESNs) constitute a novel approach to recurrent neural network (RNN) training, with an RNN (the reservoir) being generated randomly, and only a readout being trained using a simple, computationally efficient algorithm. ESNs have greatly facilitated the practical application of RNNs, outperforming classical approaches on a number of benchmark tasks. This paper studies the formulation of a class of copula-based semiparametric models for sequential data modeling, characterized by nonparametric marginal distributions modeled by postulating suitable echo state networks, and parametric copula functions that help capture all the scale-free temporal dependence of the modeled processes. We provide a simple algorithm for the data-driven estimation of the marginal distribution and the copula parameters of our model under the maximum-likelihood framework. We exhibit the merits of our approach by considering a number of applications; as we show, our method offers a significant enhancement in the dynamical data modeling capabilities of ESNs, without significant compromises in the algorithm's computational efficiency.
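The copula idea the abstract refers to can be illustrated in a few lines: transform the series' marginal to uniform via its empirical CDF, map to normal scores, and measure the lag-1 temporal dependence with a Gaussian-copula correlation. This is an illustrative sketch, not the paper's model (the ESN-based marginal modelling is omitted, and the toy series is an assumption).

```python
# Sketch of the copula idea behind the model: empirical-CDF transform of the
# margin, probit mapping to normal scores, Gaussian-copula lag-1 correlation.
# Illustrative only; the ESN marginal models of the paper are not shown.
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(2)
s = np.zeros(2000)
for t in range(1, len(s)):                     # AR(1) driven by exponential noise
    s[t] = 0.8 * s[t - 1] + rng.exponential(1.0)

u = rankdata(s) / (len(s) + 1)                 # empirical CDF -> uniform margin
z = norm.ppf(u)                                # normal scores
rho = np.corrcoef(z[:-1], z[1:])[0, 1]         # lag-1 Gaussian-copula correlation
print("lag-1 copula correlation:", round(rho, 3))
```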

JOURNAL ARTICLE

Chatzis SP, Demiris Y, 2012, A Sparse Nonparametric Hierarchical Bayesian Approach Towards Inductive Transfer for Preference Modeling, Expert Systems with Applications, Vol: 39, Pages: 7235-7246

In this paper, we present a novel methodology for preference learning based on the concept of inductive transfer. Specifically, we introduce a nonparametric hierarchical Bayesian multitask learning approach, based on the notion that human subjects may cluster together forming groups of individuals with similar preference rationale (but not identical preferences). Our approach is facilitated by the utilization of a Dirichlet process prior, which allows for the automatic inference of the most appropriate number of subject groups (clusters), as well as the employment of the automatic relevance determination (ARD) mechanism, giving rise to a sparse nature for our model, which significantly enhances its computational efficiency. We explore the efficacy of our novel approach by applying it to both a synthetic experiment and a real-world music recommendation application. As we show, our approach offers a significant enhancement in the effectiveness of knowledge transfer in statistical preference learning applications, being capable of correctly inferring the actual number of human subject groups in a modeled dataset, and limiting knowledge transfer only to subjects belonging to the same group (wherein knowledge transferability is more likely).

JOURNAL ARTICLE

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
