Imperial College London

Professor Yiannis Demiris

Faculty of Engineering, Department of Electrical and Electronic Engineering

Professor of Human-Centred Robotics, Head of ISN

Contact

+44 (0)20 7594 6300
y.demiris
Website

Location

1014, Electrical Engineering, South Kensington Campus


Publications


199 results found

Fischer T, Demiris Y, 2019, Computational modelling of embodied visual perspective-taking, IEEE Transactions on Cognitive and Developmental Systems, ISSN: 2379-8920

Humans are inherently social beings that benefit from their perceptional capability to embody another point of view, typically referred to as perspective-taking. Perspective-taking is an essential feature in our daily interactions and is pivotal for human development. However, much remains unknown about the precise mechanisms that underlie perspective-taking. Here we show that formalizing perspective-taking in a computational model can detail the embodied mechanisms employed by humans in perspective-taking. The model's main building block is a set of action primitives that are passed through a forward model. The model employs a process that selects a subset of action primitives to be passed through the forward model to reduce the response time. The model demonstrates results that mimic those captured by human data, including (i) response time differences caused by the angular disparity between the perspective-taker and the other agent, (ii) the impact of task-irrelevant body posture variations in perspective-taking, and (iii) differences in the perspective-taking strategy between individuals. Our results provide support for the hypothesis that perspective-taking is a mental simulation of the physical movements that are required to match another person's visual viewpoint. Furthermore, the model provides several testable predictions, including the prediction that forced early responses lead to an egocentric bias and that a selection process introduces dependencies between two consecutive trials. Our results indicate potential links between perspective-taking and other essential perceptional and cognitive mechanisms, such as active vision and autobiographical memories.
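
The mental-simulation account above can be illustrated with a toy computation: predicted response time grows with the number of rotation primitives needed to align the egocentric frame with the other agent's viewpoint. The sketch below only illustrates that relationship; the step size and timing constants are invented for the example and are not the paper's fitted parameters.

```python
import numpy as np

def simulate_perspective_taking(angular_disparity_deg, step_deg=20.0,
                                base_time_ms=500.0, per_step_ms=60.0):
    """Toy mental-rotation model: rotate the egocentric frame in fixed-size
    primitive steps until it matches the other agent's viewpoint. The number
    of forward-model steps grows with angular disparity, so the predicted
    response time grows with it too (illustrative values, not fitted data)."""
    remaining = abs(angular_disparity_deg)
    steps = 0
    while remaining > 1e-6:
        rotation = min(step_deg, remaining)  # one action primitive per iteration
        remaining -= rotation
        steps += 1
    return base_time_ms + per_step_ms * steps

for disparity in (0, 45, 90, 135, 180):
    print(disparity, simulate_perspective_taking(disparity))
```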

Journal article

Cortacero K, Fischer T, Demiris Y, 2019, RT-BENE: A Dataset and Baselines for Real-Time Blink Estimation in Natural Environments, IEEE International Conference on Computer Vision Workshops, Publisher: Institute of Electrical and Electronics Engineers Inc.

In recent years gaze estimation methods have made substantial progress, driven by the numerous application areas including human-robot interaction, visual attention estimation and foveated rendering for virtual reality headsets. However, many gaze estimation methods typically assume that the subject's eyes are open; for closed eyes, these methods provide irregular gaze estimates. Here, we address this assumption by first introducing a new open-sourced dataset with annotations of the eye-openness of more than 200,000 eye images, including more than 10,000 images where the eyes are closed. We further present baseline methods that allow for blink detection using convolutional neural networks. In extensive experiments, we show that the proposed baselines perform favourably in terms of precision and recall. We further incorporate our proposed RT-BENE baselines in the recently presented RT-GENE gaze estimation framework where it provides a real-time inference of the openness of the eyes. We argue that our work will benefit both gaze estimation and blink estimation methods, and we take steps towards unifying these methods.
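
As a rough illustration of the kind of convolutional baseline described, the sketch below maps a single eye patch to an eye-closedness probability. The architecture, input size and layer widths are assumptions made for the example and do not reproduce the RT-BENE baselines.

```python
import torch
import torch.nn as nn

class BlinkClassifier(nn.Module):
    """Minimal CNN mapping a 36x60 grayscale eye patch to a probability that
    the eye is closed. Illustrative only; not the RT-BENE architecture."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 36x60 -> 18x30
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 18x30 -> 9x15
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 9 * 15, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))

model = BlinkClassifier()
eye_patches = torch.randn(8, 1, 36, 60)   # batch of eye images
blink_prob = model(eye_patches)           # shape (8, 1), values in (0, 1)
print(blink_prob.shape)
```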

Conference paper

Taniguchi T, Ugur E, Ogata T, Nagai T, Demiris Y et al., 2019, Editorial: Machine Learning Methods for High-Level Cognitive Capabilities in Robotics, Frontiers in Neurorobotics, Vol: 13, ISSN: 1662-5218

Journal article

Zambelli M, Cully A, Demiris Y, Multimodal representation models for prediction and control from partial information, Robotics and Autonomous Systems, ISSN: 0921-8890

Similar to humans, robots benefit from interacting with their environment through a number of different sensor modalities, such as vision, touch and sound. However, learning from different sensor modalities is difficult, because the learning model must be able to handle diverse types of signals, and learn a coherent representation even when parts of the sensor inputs are missing. In this paper, a multimodal variational autoencoder is proposed to enable an iCub humanoid robot to learn representations of its sensorimotor capabilities from different sensor modalities. The proposed model is able to (1) reconstruct missing sensory modalities, (2) predict the sensorimotor state of self and the visual trajectories of other agents' actions, and (3) control the agent to imitate an observed visual trajectory. Also, the proposed multimodal variational autoencoder can capture the kinematic redundancy of the robot motion through the learned probability distribution. Training multimodal models is not trivial due to the combinatorial complexity given by the possibility of missing modalities. We propose a strategy to train multimodal models, which successfully achieves improved performance of different reconstruction models. Finally, extensive experiments have been carried out using an iCub humanoid robot, showing high performance in multiple reconstruction, prediction and imitation tasks.
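
A minimal sketch of the multimodal variational autoencoder idea follows, assuming a simple concatenate-and-mask treatment of missing modalities; the modality dimensions, masking scheme and loss weighting are illustrative assumptions, not the trained iCub model.

```python
import torch
import torch.nn as nn

class MultimodalVAE(nn.Module):
    """Toy multimodal VAE: concatenated modality inputs -> shared latent ->
    reconstruction of every modality. Missing modalities are zero-masked at
    the input, while the decoder still predicts them, which is what lets the
    model fill in absent sensors. Dimensions are illustrative."""
    def __init__(self, joint_dim=4, touch_dim=3, vision_dim=6, latent_dim=5):
        super().__init__()
        in_dim = joint_dim + touch_dim + vision_dim
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))

    def forward(self, x, mask):
        h = self.encoder(x * mask)                 # mask out missing modalities
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

model = MultimodalVAE()
x = torch.randn(16, 13)                            # joints + touch + vision
mask = torch.ones_like(x); mask[:, 7:] = 0.0       # pretend vision is missing
recon, mu, logvar = model(x, mask)
# Reconstruction error plus KL term (equal weighting here is illustrative).
loss = ((recon - x) ** 2).mean() - 0.5 * (1 + logvar - mu**2 - logvar.exp()).mean()
print(loss.item())
```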

Journal article

Zhang F, Cully A, Demiris Y, 2019, Probabilistic real-time user posture tracking for personalized robot-assisted dressing, IEEE Transactions on Robotics, Vol: 35, Pages: 873-888, ISSN: 1552-3098

Robotic solutions to dressing assistance have the potential to provide tremendous support for elderly and disabled people. However, unexpected user movements may lead to dressing failures or even pose a risk to the user. Tracking such user movements with vision sensors is challenging due to severe visual occlusions created by the robot and clothes. In this paper, we propose a probabilistic tracking method using Bayesian networks in latent spaces, which fuses robot end-effector positions and force information to enable cameraless and real-time estimation of the user postures during dressing. The latent spaces are created before dressing by modeling the user movements with a Gaussian process latent variable model, taking the user’s movement limitations into account. We introduce a robot-assisted dressing system that combines our tracking method with hierarchical multitask control to minimize the force between the user and the robot. The experimental results demonstrate the robustness and accuracy of our tracking method. The proposed method enables the Baxter robot to provide personalized dressing assistance in putting on a sleeveless jacket for users with (simulated) upper-body impairments.
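
The cameraless fusion idea can be sketched as recursive inference in a low-dimensional latent posture space. Below, a simple particle filter stands in for the paper's Bayesian-network inference, and decode is a hypothetical placeholder for a learned GPLVM-style latent-to-observation map; all dimensions and noise levels are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(z):
    """Stand-in for a learned latent-to-observation map (e.g. a GPLVM mean
    function): latent posture -> expected [end-effector features, force]."""
    return np.concatenate([np.sin(z), np.cos(z), [z @ z]])  # hypothetical

def particle_filter_step(particles, observation, obs_noise=0.5, proc_noise=0.05):
    """One fuse-and-resample step: diffuse particles in latent space, weight
    them by how well their decoded observation matches the measured
    end-effector position and force, then resample."""
    particles = particles + rng.normal(0.0, proc_noise, particles.shape)
    errors = np.array([np.sum((decode(p) - observation) ** 2) for p in particles])
    # Subtract the minimum error before exponentiating for numerical stability.
    weights = np.exp(-(errors - errors.min()) / (2 * obs_noise ** 2))
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

particles = rng.normal(0.0, 1.0, size=(200, 2))   # 2-D latent posture space
true_z = np.array([0.4, -0.2])
for _ in range(30):
    obs = decode(true_z) + rng.normal(0.0, 0.05, 5)
    particles = particle_filter_step(particles, obs)
print("estimated latent posture:", particles.mean(axis=0), "true:", true_z)
```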

Journal article

Bagga S, Maurer B, Miller T, Quinlan L, Silvestri L, Wells D, Winqvist R, Zolotas M, Demiris Y et al., 2019, instruMentor: An Interactive Robot for Musical Instrument Tutoring, Towards Autonomous Robotic Systems Conference, Publisher: Springer International Publishing, Pages: 303-315, ISSN: 0302-9743

Conference paper

Zolotas M, Demiris Y, Towards explainable shared control using augmented reality, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), Publisher: IEEE

Shared control plays a pivotal role in establishing effective human-robot interactions. Traditional control-sharing methods strive to complement a human's capabilities at safely completing a task, and thereby rely on users forming a mental model of the expected robot behaviour. However, these methods can often bewilder or frustrate users whenever their actions do not elicit the intended system response, forming a misalignment between the respective internal models of the robot and human. To resolve this model misalignment, we introduce Explainable Shared Control as a paradigm in which assistance and information feedback are jointly considered. Augmented reality is presented as an integral component of this paradigm, by visually unveiling the robot's inner workings to human operators. Explainable Shared Control is instantiated and tested for assistive navigation in a setup involving a robotic wheelchair and a Microsoft HoloLens with add-on eye tracking. Experimental results indicate that the introduced paradigm facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment.

Conference paper

Wang R, Ciliberto C, Amadori P, Demiris Y et al., 2019, Random Expert Distillation: Imitation Learning via Expert Policy Support Estimation, Thirty-sixth International Conference on Machine Learning, Publisher: Proceedings of International Conference on Machine Learning (ICML-2019)

We consider the problem of imitation learning from a finite set of expert trajectories, without access to reinforcement signals. The classical approach of extracting the expert's reward function via inverse reinforcement learning, followed by reinforcement learning, is indirect and may be computationally expensive. Recent generative adversarial methods based on matching the policy distribution between the expert and the agent could be unstable during training. We propose a new framework for imitation learning by estimating the support of the expert policy to compute a fixed reward function, which allows us to re-frame imitation learning within the standard reinforcement learning setting. We demonstrate the efficacy of our reward function on both discrete and continuous domains, achieving comparable or better performance than the state of the art under different reinforcement learning algorithms.
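
The support-estimation reward can be sketched with a random-network-distillation style pair of networks fitted on expert state-action pairs: the prediction error is low on the expert distribution and higher elsewhere, so an exponentiated negative error acts as a fixed reward. Network sizes, the training schedule and the reward scale below are illustrative assumptions, and the expert data are placeholders.

```python
import torch
import torch.nn as nn

def make_net(in_dim, out_dim=32):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

state_dim, action_dim = 8, 2
target = make_net(state_dim + action_dim)          # fixed, randomly initialised network
predictor = make_net(state_dim + action_dim)       # trained on expert data only
for p in target.parameters():
    p.requires_grad_(False)

expert_sa = torch.randn(512, state_dim + action_dim)   # placeholder expert (s, a) pairs
opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
for _ in range(200):                                # fit the predictor to the random target
    loss = ((predictor(expert_sa) - target(expert_sa)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

def reward(state_action, sigma=1.0):
    """Support-estimation reward: small prediction error inside the expert
    support gives a reward near 1, large error off-support gives near 0."""
    err = ((predictor(state_action) - target(state_action)) ** 2).mean(dim=-1)
    return torch.exp(-sigma * err)

print(reward(expert_sa[:4]))                        # high reward on expert-like pairs
print(reward(torch.randn(4, state_dim + action_dim) * 5))  # typically lower off-support
```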

Conference paper

Cully A, Demiris Y, 2019, Online knowledge level tracking with data-driven student models and collaborative filtering, IEEE Transactions on Knowledge and Data Engineering, ISSN: 1041-4347

Intelligent Tutoring Systems are promising tools for delivering optimal and personalised learning experiences to students. A key component for their personalisation is the student model, which infers the knowledge level of the students to balance the difficulty of the exercises. While important advances have been achieved, several challenges remain. In particular, the models should be able to track in real-time the evolution of the students' knowledge levels. These evolutions are likely to follow different profiles for each student, while measuring the exact knowledge level remains difficult given the limited and noisy information provided by the interactions. This paper introduces a novel model that addresses these challenges with three contributions: 1) the model relies on Gaussian Processes to track online the evolution of the student's knowledge level over time, 2) it uses collaborative filtering to rapidly provide long-term predictions by leveraging the information from previous users, and 3) it automatically generates abstract representations of knowledge components via automatic relevance determination of covariance matrices. The model is evaluated on three datasets, including real users. The results demonstrate that the model converges to accurate predictions on average 4 times faster than the compared methods.
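
A minimal sketch of the Gaussian-process component is shown below, tracking a single student's knowledge level over time with scikit-learn; the collaborative-filtering and automatic-relevance-determination parts of the model are omitted, and the observation data are invented for the example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Noisy observations of one student's performance over time
# (1 = exercise solved, 0 = failed); times are in arbitrary session units.
times = np.array([[0.], [1.], [2.], [4.], [5.], [7.], [8.]])
outcomes = np.array([0., 0., 1., 0., 1., 1., 1.])

# Smooth temporal kernel plus a noise term for the limited, noisy interactions.
kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(times, outcomes)

query = np.linspace(0, 10, 6).reshape(-1, 1)
mean, std = gp.predict(query, return_std=True)   # knowledge-level estimate + uncertainty
for t, m, s in zip(query.ravel(), mean, std):
    print(f"t={t:4.1f}  estimated level={m:5.2f}  +/- {s:4.2f}")
```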

Journal article

Zolotas M, Elsdon J, Demiris Y, 2019, Head-mounted augmented reality for explainable robotic wheelchair assistance, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, ISSN: 2153-0866

Robotic wheelchairs with built-in assistive features, such as shared control, are an emerging means of providing independent mobility to severely disabled individuals. However, patients often struggle to build a mental model of their wheelchair's behaviour under different environmental conditions. Motivated by the desire to help users bridge this gap in perception, we propose a novel augmented reality system using a Microsoft HoloLens as a head-mounted aid for wheelchair navigation. The system displays visual feedback to the wearer as a way of explaining the underlying dynamics of the wheelchair's shared controller and its predicted future states. To investigate the influence of different interface design options, a pilot study was also conducted. We evaluated the acceptance rate and learning curve of an immersive wheelchair training regime, revealing preliminary insights into the potential beneficial and adverse nature of different augmented reality cues for assistive navigation. In particular, we demonstrate that care should be taken in the presentation of information, with effort-reducing cues for augmented information acquisition (for example, a rear-view display) being the most appreciated.

Conference paper

Di Veroli C, Le CA, Lemaire T, Makabu E, Nur A, Ooi V, Park JY, Sanna F, Chacon R, Demiris Y et al., 2019, LibRob: An autonomous assistive librarian, Pages: 15-26, ISBN: 9783030253318

This study explores how new robotic systems can help library users efficiently locate the book they require. A survey conducted among Imperial College students has shown an absence of a time-efficient and organised method to find the books they are looking for in the college library. The solution implemented, LibRob, is an automated assistive robot that gives guidance to the users in finding the book they are searching for in an interactive manner to deliver a more satisfactory experience. LibRob is able to process a search request either by speech or by text and return a list of relevant books by author, subject or title. Once the user selects the book of interest, LibRob guides them to the shelf containing the book, then returns to its base station on completion. Experimental results demonstrate that the robot reduces the time necessary to find a book by 47.4%, and leaves 80% of the users satisfied with their experience, proving that human-robot interactions can greatly improve the efficiency of basic activities within a library environment.

Book chapter

Fischer T, 2019, Perspective Taking in Robots: A Framework and Computational Model

Humans are inherently social beings that benefit from their perceptional capability to embody another point of view. This thesis examines this capability, termed perspective taking, using a mixed forward/reverse engineering approach. While previous approaches were limited to known, artificial environments, the proposed approach results in a perceptional framework that can be used in unconstrained environments while at the same time detailing the mechanisms that humans use to infer the world's characteristics from another viewpoint. First, the thesis explores a forward engineering approach by outlining the required perceptional components and implementing these components on a humanoid iCub robot. Prior to and during the perspective taking, the iCub learns the environment and recognizes its constituent objects before approximating the gaze of surrounding humans based on their head poses. Inspired by psychological studies, two separate mechanisms for the two types of perspective taking are employed, one based on line-of-sight tracing and another based on the mental rotation of the environment. Acknowledging that human head pose is only a rough indication of a human's viewpoint, the thesis introduces a novel, automated approach for ground truth eye gaze annotation. This approach is used to collect a new dataset, which covers a wide range of camera-subject distances, head poses, and gazes. A novel gaze estimation method trained on this dataset outperforms previous methods in close distance scenarios, while going beyond previous methods and also allowing eye gaze estimation in large camera-subject distances that are commonly encountered in human-robot interactions. Finally, the thesis proposes a computational model as an instantiation of a reverse engineering approach, with the aim of understanding the underlying mechanisms of perspective taking in humans. The model contains a set of forward models as building blocks, and an attentional component to reduce the model's response time.

Thesis dissertation

Choi J, Chang HJ, Fischer T, Yun S, Lee K, Jeong J, Demiris Y, Choi JY et al., 2018, Context-aware deep feature compression for high-speed visual tracking, IEEE Conference on Computer Vision and Pattern Recognition, Publisher: Institute of Electrical and Electronics Engineers, Pages: 479-488, ISSN: 1063-6919

We propose a new context-aware correlation filter based tracking framework to achieve both high computational speed and state-of-the-art performance among real-time trackers. The major contribution to the high computational speed lies in the proposed deep feature compression that is achieved by a context-aware scheme utilizing multiple expert auto-encoders; a context in our framework refers to the coarse category of the tracking target according to appearance patterns. In the pre-training phase, one expert auto-encoder is trained per category. In the tracking phase, the best expert auto-encoder is selected for a given target, and only this auto-encoder is used. To achieve high tracking performance with the compressed feature map, we introduce extrinsic denoising processes and a new orthogonality loss term for pre-training and fine-tuning of the expert auto-encoders. We validate the proposed context-aware framework through a number of experiments, where our method achieves a comparable performance to state-of-the-art trackers which cannot run in real-time, while running at a speed of over 100 fps.
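
The select-one-expert idea can be sketched as picking, at tracking time, the autoencoder with the lowest reconstruction error on the target's feature map. The sketch omits the denoising processes and orthogonality loss used in the paper, and the number of experts, channel counts and feature-map size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ExpertAutoEncoder(nn.Module):
    """One per coarse appearance category: compresses a deep feature map and
    reconstructs it; the expert that reconstructs the target's features best
    is selected and used alone during tracking."""
    def __init__(self, channels=64, compressed=16):
        super().__init__()
        self.encoder = nn.Conv2d(channels, compressed, kernel_size=1)
        self.decoder = nn.Conv2d(compressed, channels, kernel_size=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

experts = [ExpertAutoEncoder() for _ in range(4)]   # 4 hypothetical categories
target_features = torch.randn(1, 64, 13, 13)        # deep features of the target

with torch.no_grad():
    errors = [((e(target_features) - target_features) ** 2).mean() for e in experts]
    best = int(torch.stack(errors).argmin())
    compressed = experts[best].encoder(target_features)  # only this expert runs online
print("selected expert:", best, "compressed shape:", tuple(compressed.shape))
```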

Conference paper

Moulin-Frier C, Fischer T, Petit M, Pointeau G, Puigbo JY, Pattacini U, Low SC, Camilleri D, Nguyen P, Hoffmann M, Chang HJ, Zambelli M, Mealier AL, Damianou A, Metta G, Prescott TJ, Demiris Y, Dominey PF, Verschure PFMJ et al., 2018, DAC-h3: A Proactive Robot Cognitive Architecture to Acquire and Express Knowledge About the World and the Self, IEEE Transactions on Cognitive and Developmental Systems, Vol: 10, Pages: 1005-1022, ISSN: 2379-8920

This paper introduces a cognitive architecture for a humanoid robot to engage in a proactive, mixed-initiative exploration and manipulation of its environment, where the initiative can originate from both the human and the robot. The framework, based on a biologically-grounded theory of the brain and mind, integrates a reactive interaction engine, a number of state-of-the-art perceptual and motor learning algorithms, as well as planning abilities and an autobiographical memory. The architecture as a whole drives the robot behavior to solve the symbol grounding problem, acquire language capabilities, execute goal-oriented behavior, and express a verbal narrative of its own experience in the world. We validate our approach in human-robot interaction experiments with the iCub humanoid robot, showing that the proposed cognitive architecture can be applied in real time within a realistic scenario and that it can be used with naive users.

Journal article

Chang HJ, Fischer T, Petit M, Zambelli M, Demiris Y et al., 2018, Learning kinematic structure correspondences using multi-order similarities, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol: 40, Pages: 2920-2934, ISSN: 0162-8828

We present a novel framework for finding the kinematic structure correspondences between two articulated objects in videos via hypergraph matching. In contrast to appearance and graph alignment based matching methods, which have been applied between two similar static images, the proposed method finds correspondences between two dynamic kinematic structures of heterogeneous objects in videos. Thus our method allows matching the structure of objects which have similar topologies or motions, or a combination of the two. Our main contributions are summarised as follows: (i) casting the kinematic structure correspondence problem into a hypergraph matching problem by incorporating multi-order similarities with normalising weights, (ii) introducing a structural topology similarity measure by aggregating topology constrained subgraph isomorphisms, (iii) measuring kinematic correlations between pairwise nodes, and (iv) proposing a combinatorial local motion similarity measure using geodesic distance on the Riemannian manifold. We demonstrate the robustness and accuracy of our method through a number of experiments on synthetic and real data, showing that various other recent and state of the art methods are outperformed. Our method is not limited to a specific application nor sensor, and can be used as a building block in applications such as action recognition, human motion retargeting to robots, and articulated object manipulation.

Journal article

Sarabia M, Young N, Canavan K, Edginton T, Demiris Y, Vizcaychipi MP et al., 2018, Assistive robotic technology to combat social isolation in acute hospital settings, International Journal of Social Robotics, Vol: 10, Pages: 607-620, ISSN: 1875-4791

Social isolation in hospitals is a well established risk factor for complications such as cognitive decline and depression. Assistive robotic technology has the potential to combat this problem, but first it is critical to investigate how hospital patients react to this technology. In order to address this question, we introduced a remotely operated NAO humanoid robot which conversed, made jokes, played music, danced and exercised with patients in a London hospital. In total, 49 patients aged between 18 and 100 took part in the study, 7 of whom had dementia. Our results show that a majority of patients enjoyed their interaction with NAO. We also found that age and dementia significantly affect the interaction, whereas gender does not. These results indicate that hospital patients enjoy socialising with robots, opening new avenues for future research into the potential health benefits of a social robotic companion.

Journal article

Fischer T, Chang HJ, Demiris Y, 2018, RT-GENE: Real-time eye gaze estimation in natural environments, European Conference on Computer Vision, Publisher: Springer Verlag, Pages: 339-357, ISSN: 0302-9743

In this work, we consider the problem of robust gaze estimation in natural environments. Large camera-to-subject distances and high variations in head pose and eye gaze angles are common in such environments. This leads to two main shortfalls in state-of-the-art methods for gaze estimation: hindered ground truth gaze annotation and diminished gaze estimation accuracy as image resolution decreases with distance. We first record a novel dataset of varied gaze and head pose images in a natural environment, addressing the issue of ground truth annotation by measuring head pose using a motion capture system and eye gaze using mobile eyetracking glasses. We apply semantic image inpainting to the area covered by the glasses to bridge the gap between training and testing images by removing the obtrusiveness of the glasses. We also present a new real-time algorithm involving appearance-based deep convolutional neural networks with increased capacity to cope with the diverse images in the new dataset. Experiments with this network architecture are conducted on a number of diverse eye-gaze datasets including our own, and in cross dataset evaluations. We demonstrate state-of-the-art performance in terms of estimation accuracy in all experiments, and the architecture performs well even on lower resolution images.

Conference paper

Nguyen P, Fischer T, Chang HJ, Pattacini U, Metta G, Demiris Y et al., 2018, Transferring visuomotor learning from simulation to the real world for robotics manipulation tasks, IEEE/RSJ International Conference on Intelligent Robots and Systems, Publisher: IEEE, Pages: 6667-6674, ISSN: 2153-0866

Hand-eye coordination is a requirement for many manipulation tasks including grasping and reaching. However, accurate hand-eye coordination has proven to be especially difficult to achieve in complex robots like the iCub humanoid. In this work, we solve the hand-eye coordination task using a visuomotor deep neural network predictor that estimates the arm's joint configuration given a stereo image pair of the arm and the underlying head configuration. As there are various unavoidable sources of sensing error on the physical robot, we train the predictor on images obtained from simulation. The images from simulation were modified to look realistic using an image-to-image translation approach. In various experiments, we first show that the visuomotor predictor provides accurate joint estimates of the iCub's hand in simulation. We then show that the predictor can be used to obtain the systematic error of the robot's joint measurements on the physical iCub robot. We demonstrate that a calibrator can be designed to automatically compensate for this error. Finally, we validate that this enables accurate reaching of objects while circumventing manual fine-calibration of the robot.

Conference paper

Chacon Quesada R, Demiris Y, 2018, Augmented reality control of smart wheelchair using eye-gaze–enabled selection of affordances, IROS 2018 Workshop on Robots for Assisted Living, https://www.idiap.ch/workshop/iros2018/files/

In this paper we present a novel augmented reality head-mounted display user interface for controlling a robotic wheelchair for people with limited mobility. To lower the cognitive requirements needed to control the wheelchair, we propose integration of a smart wheelchair with an eye-tracking enabled head-mounted display. We propose a novel platform that integrates multiple user interface interaction methods for aiming at and selecting affordances derived by on-board perception capabilities such as laser-scanner readings and cameras. We demonstrate the effectiveness of the approach by evaluating our platform in two realistic scenarios: 1) door detection, where the affordance corresponds to a Door object and the Go-Through action, and 2) people detection, where the affordance corresponds to a Person and the Approach action. To the best of our knowledge, this is the first demonstration of an augmented reality head-mounted display user interface for controlling a smart wheelchair.

Conference paper

Goncalves Nunes U, Demiris Y, 2018, 3D motion segmentation of articulated rigid bodies based on RGB-D data, British Machine Vision Conference (BMVC 2018), Publisher: British Machine Vision Association (BMVA)

This paper addresses the problem of motion segmentation of articulated rigid bodies from a single-view RGB-D data sequence. Current methods either perform dense motion segmentation, and consequently are very computationally demanding, or rely on sparse 2D feature points, which may not be sufficient to represent the entire scene. In this paper, we advocate the use of 3D semi-dense motion segmentation, which also bridges some limitations of standard 2D methods (e.g. background removal). We cast the 3D motion segmentation problem into a subspace clustering problem, adding an adaptive spectral clustering that estimates the number of object rigid parts. The resultant method has few parameters to adjust, takes less time than the temporal length of the scene and requires no post-processing.
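
The adaptive clustering step can be illustrated with the eigengap heuristic: the number of rigid parts is read off the spectrum of a normalized graph Laplacian built from trajectory affinities. The toy trajectories and the Gaussian affinity below are assumptions for the example, not the paper's semi-dense 3D pipeline.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)

# Toy "trajectories": two rigid parts moving differently over 10 frames.
part_a = rng.normal(0, 0.05, (20, 10)) + np.linspace(0, 1, 10)
part_b = rng.normal(0, 0.05, (20, 10)) - np.linspace(0, 1, 10)
trajectories = np.vstack([part_a, part_b])

# Affinity between trajectories (Gaussian kernel on motion difference).
d2 = ((trajectories[:, None, :] - trajectories[None, :, :]) ** 2).sum(-1)
affinity = np.exp(-d2 / d2.mean())

# Eigengap heuristic on the normalized Laplacian to pick the number of parts.
deg = affinity.sum(1)
lap = np.eye(len(affinity)) - affinity / np.sqrt(np.outer(deg, deg))
eigvals = np.sort(np.linalg.eigvalsh(lap))
gaps = np.diff(eigvals[:10])
n_parts = int(np.argmax(gaps)) + 1
labels = SpectralClustering(n_clusters=n_parts, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print("estimated parts:", n_parts, "labels:", labels)
```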

Conference paper

Wang R, Amadori P, Demiris Y, Real-time workload classification during driving using HyperNetworks, International Conference on Intelligent Robots and Systems (IROS 2018), Publisher: IEEE, ISSN: 2153-0866

Classifying human cognitive states from behavioral and physiological signals is a challenging problem with important applications in robotics. The problem is challenging due to the data variability among individual users, and sensor artifacts. In this work, we propose an end-to-end framework for real-time cognitive workload classification with mixture Hyper Long Short Term Memory Networks (m-HyperLSTM), a novel variant of HyperNetworks. Evaluating the proposed approach on an eye-gaze pattern dataset collected from simulated driving scenarios of different cognitive demands, we show that the proposed framework outperforms previous baseline methods and achieves 83.9% precision and 87.8% recall during test. We also demonstrate the merit of our proposed architecture by showing improved performance over other LSTM-based methods.

Conference paper

Kucukyilmaz A, Demiris Y, 2018, Learning shared control by demonstration for personalized wheelchair assistance, IEEE Transactions on Haptics, Vol: 11, Pages: 431-442, ISSN: 1939-1412

An emerging research problem in assistive robotics is the design of methodologies that allow robots to provide personalized assistance to users. For this purpose, we present a method to learn shared control policies from demonstrations offered by a human assistant. We train a Gaussian process (GP) regression model to continuously regulate the level of assistance between the user and the robot, given the user's previous and current actions and the state of the environment. The assistance policy is learned after only a single human demonstration, i.e., in one-shot. Our technique is evaluated in a one-of-a-kind experimental study, where the machine-learned shared control policy is compared to human assistance. Our analyses show that our technique is successful in emulating human shared control, by matching the location and amount of offered assistance on different trajectories. We observed that the effort requirements of the users were comparable between human-robot and human-human settings. Under the learned policy, the jerkiness of the user's joystick movements dropped significantly, despite a significant increase in the jerkiness of the robot assistant's commands. In terms of performance, even though the robotic assistance increased task completion time, the average distance to obstacles stayed in similar ranges to human assistance.
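
A minimal sketch of the GP-regressed assistance idea follows, assuming a simplified input (joystick command plus distance to the nearest obstacle) and a linear blend of user and robot commands; the demonstration data and feature choice are invented for the example and do not reproduce the paper's inputs.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# One demonstration: rows of [joystick x, joystick y, distance to obstacle]
# with the level of assistance the human helper applied at that moment (0..1).
demo_inputs = np.array([[0.8, 0.0, 2.0],
                        [0.6, 0.2, 1.0],
                        [0.4, 0.4, 0.4],
                        [0.1, 0.1, 0.2]])
demo_assistance = np.array([0.0, 0.2, 0.6, 0.9])

policy = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(
    demo_inputs, demo_assistance)

def shared_control(user_cmd, robot_cmd, state):
    """Blend user and robot commands with the regressed assistance level."""
    alpha = float(np.clip(policy.predict(state.reshape(1, -1))[0], 0.0, 1.0))
    return (1.0 - alpha) * user_cmd + alpha * robot_cmd

user_cmd = np.array([0.5, 0.3])
robot_cmd = np.array([0.1, 0.0])                   # e.g. an obstacle-avoiding command
state = np.array([0.5, 0.3, 0.3])                  # close to an obstacle
print(shared_control(user_cmd, robot_cmd, state))
```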

Journal article

Fischer T, Demiris Y, 2018, A computational model for embodied visual perspective taking: from physical movements to mental simulation, Vision Meets Cognition Workshop at CVPR 2018

To understand people and their intentions, humans have developed the ability to imagine their surroundings from another visual point of view. This cognitive ability is called perspective taking and has been shown to be essential in child development and social interactions. However, the precise cognitive mechanisms underlying perspective taking remain to be fully understood. Here we present a computational model that implements perspective taking as a mental simulation of the physical movements required to step into the other point of view. The visual percept after each mental simulation step is estimated using a set of forward models. Based on our experimental results, we propose that a visual attention mechanism explains the response times reported in human visual perspective taking experiments. The model is also able to generate several testable predictions to be explored in further neurophysiological studies.

Conference paper

Elsdon J, Demiris Y, 2018, Augmented reality for feedback in a shared control spraying task, IEEE International Conference on Robotics and Automation (ICRA), Publisher: Institute of Electrical and Electronics Engineers (IEEE), Pages: 1939-1946, ISSN: 1050-4729

Using industrial robots to spray structures has been investigated extensively; however, interesting challenges emerge when using handheld spraying robots. In previous work we have demonstrated the use of shared control of a handheld spraying robot to assist a user in a 3D spraying task. In this paper we demonstrate the use of augmented reality interfaces to increase the user's progress and task awareness. We describe our solutions to challenging calibration issues between the Microsoft HoloLens system and a motion capture system without the need for well defined markers or careful alignment on the part of the user. Error relative to the motion capture system was shown to be 10mm after only a 4 second calibration routine. Secondly, we outline a logical approach for visualising liquid density for an augmented reality spraying task; this system allows the user to clearly see target regions still to be completed, areas that are complete, and areas that have been overdosed. Finally, we conducted a user study to investigate the level of assistance that a handheld robot utilising shared control methods should provide during a spraying task. Using a handheld spraying robot with a moving spray head did not aid the user much over simply actuating the spray nozzle for them. Compared to manual control, the automatic modes significantly reduced the task load experienced by the user and significantly increased the quality of the result of the spraying task, reducing the error by 33-45%.

Conference paper

Cully AHR, Demiris Y, Hierarchical Behavioral Repertoires with Unsupervised Descriptors, Genetic and Evolutionary Computation Conference 2018, Publisher: ACM

Enabling artificial agents to automatically learn complex, versatile and high-performing behaviors is a long-standing challenge. This paper presents a step in this direction with hierarchical behavioral repertoires that stack several behavioral repertoires to generate sophisticated behaviors. Each repertoire of this architecture uses the lower repertoires to create complex behaviors as sequences of simpler ones, while only the lowest repertoire directly controls the agent's movements. This paper also introduces a novel approach to automatically define behavioral descriptors thanks to an unsupervised neural network that organizes the produced high-level behaviors. The experiments show that the proposed architecture enables a robot to learn how to draw digits in an unsupervised manner after having learned to draw lines and arcs. Compared to traditional behavioral repertoires, the proposed architecture reduces the dimensionality of the optimization problems by orders of magnitude and provides behaviors with twice the fitness. More importantly, it enables the transfer of knowledge between robots: a hierarchical repertoire evolved for a robotic arm to draw digits can be transferred to a humanoid robot by simply changing the lowest layer of the hierarchy. This enables the humanoid to draw digits although it has never been trained for this task.

Conference paper

Fischer T, Puigbo J-Y, Camilleri D, Nguyen PDH, Moulin-Frier C, Lallee S, Metta G, Prescott TJ, Demiris Y, Verschure P et al., 2018, iCub-HRI: A software framework for complex human-robot interaction scenarios on the iCub humanoid robot, Frontiers in Robotics and AI, Vol: 5, Pages: 1-9, ISSN: 2296-9144

Generating complex, human-like behaviour in a humanoid robot like the iCub requires the integration of a wide range of open source components and a scalable cognitive architecture. Hence, we present the iCub-HRI library, which provides convenience wrappers for components related to perception (object recognition, agent tracking, speech recognition, touch detection), object manipulation (basic and complex motor actions) and social interaction (speech synthesis, joint attention), exposed as a C++ library with bindings for Java (allowing iCub-HRI to be used within Matlab) and Python. In addition to previously integrated components, the library allows for simple extension to new components and rapid prototyping by adapting to changes in interfaces between components. We also provide a set of modules which make use of the library, such as a high-level knowledge acquisition module and an action recognition module. The proposed architecture has been successfully employed for a complex human-robot interaction scenario involving the acquisition of language capabilities, execution of goal-oriented behaviour and expression of a verbal narrative of the robot's experience in the world. Accompanying this paper is a tutorial which allows a subset of this interaction to be reproduced. The architecture is aimed at researchers familiarising themselves with the iCub ecosystem, as well as expert users, and we expect the library to be widely used in the iCub community.

Journal article

Zhang F, Cully A, Demiris Y, 2017, Personalized Robot-assisted Dressing using User Modeling in Latent Spaces, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, ISSN: 2153-0866

Robots have the potential to provide tremendous support to disabled and elderly people in their everyday tasks, such as dressing. Many recent studies on robotic dressing assistance view dressing as a trajectory planning problem. However, the user movements during the dressing process are rarely taken into account, which often leads to failure of the planned trajectory and may put the user at risk. The main difficulty of taking user movements into account is caused by severe occlusions created by the robot, the user, and the clothes during the dressing process, which prevent vision sensors from accurately detecting the postures of the user in real time. In this paper, we address this problem by introducing an approach that allows the robot to automatically adapt its motion according to the force applied on the robot's gripper caused by user movements. There are two main contributions introduced in this paper: 1) the use of a hierarchical multi-task control strategy to automatically adapt the robot motion and minimize the force applied between the user and the robot caused by user movements; 2) the online update of the dressing trajectory based on the user movement limitations modeled with the Gaussian Process Latent Variable Model in a latent space, and the density information extracted from such latent space. The combination of these two contributions leads to a personalized dressing assistance that can cope with unpredicted user movements during the dressing while constantly minimizing the force that the robot may apply on the user. The experimental results demonstrate that the proposed method allows the Baxter humanoid robot to provide personalized dressing assistance for human users with simulated upper-body impairments.

Conference paper

Choi J, Chang HJ, Yun S, Fischer T, Demiris Y, Choi JY et al., 2017, Attentional correlation filter network for adaptive visual tracking, IEEE Conference on Computer Vision and Pattern Recognition, Publisher: IEEE, ISSN: 1063-6919

We propose a new tracking framework with an attentional mechanism that chooses a subset of the associated correlation filters for increased robustness and computational efficiency. The subset of filters is adaptively selected by a deep attentional network according to the dynamic properties of the tracking target. Our contributions are manifold, and are summarised as follows: (i) Introducing the Attentional Correlation Filter Network which allows adaptive tracking of dynamic targets. (ii) Utilising an attentional network which shifts the attention to the best candidate modules, as well as predicting the estimated accuracy of currently inactive modules. (iii) Enlarging the variety of correlation filters which cover target drift, blurriness, occlusion, scale changes, and flexible aspect ratio. (iv) Validating the robustness and efficiency of the attentional mechanism for visual tracking through a number of experiments. Our method achieves similar performance to non real-time trackers, and state-of-the-art performance amongst real-time trackers.

Conference paper

Yoo YJ, Chang H, Yun S, Demiris Y, Choi JY et al., 2017, Variational autoencoded regression: high dimensional regression of visual data on complex manifold, IEEE Conference on Computer Vision and Pattern Recognition, Publisher: IEEE, Pages: 2943-2952

This paper proposes a new high dimensional regression method by merging Gaussian process regression into a variational autoencoder framework. In contrast to other regression methods, the proposed method focuses on the case where output responses are on a complex high dimensional manifold, such as images. Our contributions are summarized as follows: (i) A new regression method estimating high dimensional image responses, which is not handled by existing regression algorithms, is proposed. (ii) The proposed regression method introduces a strategy to learn the latent space as well as the encoder and decoder so that the result of the regressed response in the latent space coincides with the corresponding response in the data space. (iii) The proposed regression is embedded into a generative model, and the whole procedure is developed by the variational autoencoder framework. We demonstrate the robustness and effectiveness of our method through a number of experiments on various visual data regression problems.

Conference paper

Chang HJ, Demiris Y, 2017, Highly articulated kinematic structure estimation combining motion and skeleton information, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol: 40, Pages: 2165-2179, ISSN: 0162-8828

In this paper, we present a novel framework for unsupervised kinematic structure learning of complex articulated objects from a single-view 2D image sequence. In contrast to prior motion-based methods, which estimate relatively simple articulations, our method can generate arbitrarily complex kinematic structures with skeletal topology via a successive iterative merging strategy. The iterative merge process is guided by a density weighted skeleton map which is generated from a novel object boundary generation method from sparse 2D feature points. Our main contributions can be summarised as follows: (i) An unsupervised complex articulated kinematic structure estimation method that combines motion segments with skeleton information. (ii) An iterative fine-to-coarse merging strategy for adaptive motion segmentation and structural topology embedding. (iii) A skeleton estimation method based on a novel silhouette boundary generation from sparse feature points using an adaptive model selection method. (iv) A new highly articulated object dataset with ground truth annotation. We have verified the effectiveness of our proposed method in terms of computational time and estimation accuracy through rigorous experiments. Our experiments show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
