Imperial College London

Professor Yiannis Demiris

Faculty of Engineering, Department of Electrical and Electronic Engineering

Professor of Human-Centred Robotics, Head of ISN

Contact

 

+44 (0)20 7594 6300 | y.demiris | Website

Location

 

1011, Electrical Engineering, South Kensington Campus


Publications


270 results found

Zolotas M, Demiris Y, 2020, Towards explainable shared control using augmented reality, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), Publisher: IEEE, Pages: 3020-3026

Shared control plays a pivotal role in establishing effective human-robot interactions. Traditional control-sharing methods strive to complement a human’s capabilities at safely completing a task, and thereby rely on users forming a mental model of the expected robot behaviour. However, these methods can often bewilder or frustrate users whenever their actions do not elicit the intended system response, forming a misalignment between the respective internal models of the robot and human. To resolve this model misalignment, we introduce Explainable Shared Control as a paradigm in which assistance and information feedback are jointly considered. Augmented reality is presented as an integral component of this paradigm, by visually unveiling the robot’s inner workings to human operators. Explainable Shared Control is instantiated and tested for assistive navigation in a setup involving a robotic wheelchair and a Microsoft HoloLens with add-on eye tracking. Experimental results indicate that the introduced paradigm facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment.

Conference paper

Zambelli M, Cully A, Demiris Y, 2020, Multimodal representation models for prediction and control from partial information, Robotics and Autonomous Systems, Vol: 123, ISSN: 0921-8890

Similar to humans, robots benefit from interacting with their environment through a number of different sensor modalities, such as vision, touch, and sound. However, learning from different sensor modalities is difficult, because the learning model must be able to handle diverse types of signals, and learn a coherent representation even when parts of the sensor inputs are missing. In this paper, a multimodal variational autoencoder is proposed to enable an iCub humanoid robot to learn representations of its sensorimotor capabilities from different sensor modalities. The proposed model is able to (1) reconstruct missing sensory modalities, (2) predict the sensorimotor state of self and the visual trajectories of other agents' actions, and (3) control the agent to imitate an observed visual trajectory. Also, the proposed multimodal variational autoencoder can capture the kinematic redundancy of the robot motion through the learned probability distribution. Training multimodal models is not trivial due to the combinatorial complexity given by the possibility of missing modalities. We propose a strategy to train multimodal models, which successfully achieves improved performance of different reconstruction models. Finally, extensive experiments have been carried out using an iCub humanoid robot, showing high performance in multiple reconstruction, prediction and imitation tasks.

Journal article
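
The multimodal representation learning described above lends itself to a compact sketch. Below is a minimal PyTorch illustration of a multimodal variational autoencoder in which each modality has its own encoder and decoder, and the Gaussian latent parameters are fused only over the modalities that are present, so missing sensor streams can still be reconstructed. The layer sizes, mean-pooling fusion, and masking scheme are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of a multimodal VAE with modality masking (assumed design).
import torch
import torch.nn as nn

class MultimodalVAE(nn.Module):
    def __init__(self, modality_dims, latent_dim=8):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Linear(d, 2 * latent_dim) for d in modality_dims])
        self.decoders = nn.ModuleList(
            [nn.Linear(latent_dim, d) for d in modality_dims])

    def forward(self, inputs, mask):
        # Fuse only the available modalities (mask[i] == 1) by averaging
        # their Gaussian parameters; missing ones are simply left out.
        stats = [enc(x) for enc, x, m in zip(self.encoders, inputs, mask) if m]
        mu, logvar = torch.stack(stats).mean(0).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # Decode *all* modalities, so missing ones are reconstructed.
        return [dec(z) for dec in self.decoders], mu, logvar
```

Training with randomly masked modalities (zeroing entries of `mask`) is one plausible way to realise the training strategy the abstract alludes to.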

Zolotas M, Demiris Y, 2020, Transparent Intent for Explainable Shared Control in Assistive Robotics, 29th International Joint Conference on Artificial Intelligence, Publisher: IJCAI-INT JOINT CONF ARTIF INTELL, Pages: 5184-5185

Conference paper

Schettino V, Demiris Y, 2020, Improving Generalisation in Learning Assistance by Demonstration for Smart Wheelchairs, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 5474-5480, ISSN: 1050-4729

Conference paper

Buizza C, Fischer T, Demiris Y, 2019, Real-time multi-person pose tracking using data assimilation, IEEE Winter Conference on Applications of Computer Vision, Publisher: IEEE

We propose a framework for the integration of data assimilation and machine learning methods in human pose estimation, with the aim of enabling any pose estimation method to be run in real-time, whilst also increasing consistency and accuracy. Data assimilation and machine learning are complementary methods: the former allows us to make use of information about the underlying dynamics of a system but lacks the flexibility of a data-based model, which we can instead obtain with the latter. Our framework presents a real-time tracking module for any single or multi-person pose estimation system. Specifically, tracking is performed by a number of Kalman filters initiated for each new person appearing in a motion sequence. This permits tracking of multiple skeletons and reduces the frequency at which computationally expensive pose estimation has to be run, enabling online pose tracking. The module tracks for N frames while the pose estimates are calculated for frame (N+1). This also results in increased consistency of person identification and reduced inaccuracies due to missing joint locations and inversion of left- and right-side joints.

Conference paper
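
As a concrete illustration of the tracking module described above, here is a minimal constant-velocity Kalman filter for a single 2-D joint; one such filter would be instantiated per tracked person. The state layout, noise covariances, and 30 fps time step are assumptions.

```python
# Constant-velocity Kalman filter for one 2-D joint (illustrative values).
import numpy as np

dt = 1.0 / 30.0
F = np.array([[1, 0, dt, 0],       # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0],        # we only observe (x, y)
              [0, 1, 0, 0]])
Q = np.eye(4) * 1e-3               # process noise (assumed)
R = np.eye(2) * 1e-2               # measurement noise (assumed)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    return x, (np.eye(4) - K @ H) @ P
```

In use, predict would run every frame, while update runs only when the slower pose estimator returns a detection, which is how the module tracks for N frames while the estimate for frame (N+1) is being computed.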

Neerincx MA, van Vught W, Henkemans OB, Oleari E, Broekens J, Peters R, Kaptein F, Demiris Y, Kiefer B, Fumagalli D, Bierman B et al., 2019, Socio-cognitive engineering of a robotic partner for child's diabetes self-management, Frontiers in Robotics and AI, Vol: 6, Pages: 1-16, ISSN: 2296-9144

Social and humanoid robots hardly show up in “the wild,” aiming at pervasive and enduring human benefits such as child health. This paper presents a socio-cognitive engineering (SCE) methodology that guides the ongoing research & development for an evolving, longer-lasting human-robot partnership in practice. The SCE methodology has been applied in a large European project to develop a robotic partner that supports the daily diabetes management processes of children, aged between 7 and 14 years (i.e., Personal Assistant for a healthy Lifestyle, PAL). Four partnership functions were identified and worked out (joint objectives, agreements, experience sharing, and feedback & explanation) together with a common knowledge base and interaction design for the child's prolonged disease self-management. In an iterative refinement process of three cycles, these functions, knowledge base and interactions were built, integrated, tested, refined, and extended so that the PAL robot could increasingly act as an effective partner for diabetes management. The SCE methodology helped to integrate into the human-agent/robot system: (a) theories, models, and methods from different scientific disciplines, (b) technologies from different fields, (c) varying diabetes management practices, and (d) last but not least, the diverse individual and context-dependent needs of the patients and caregivers. The resulting robotic partner proved to support the children on the three basic needs of Self-Determination Theory: autonomy, competence, and relatedness. This paper presents the R&D methodology and the human-robot partnership framework for prolonged “blended” care of children with a chronic disease (children could use it up to 6 months; the robot in the hospitals and diabetes camps, and its avatar at home). It represents a new type of human-agent/robot system with an evolving collective intelligence. The underlying ontology and design rationale can be used

Journal article

Schettino V, Demiris Y, 2019, Inference of user-intention in remote robot wheelchair assistance using multimodal interfaces, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 4600-4606, ISSN: 2153-0858

Conference paper

Cortacero K, Fischer T, Demiris Y, 2019, RT-BENE: A Dataset and Baselines for Real-Time Blink Estimation in Natural Environments, IEEE International Conference on Computer Vision Workshops, Publisher: Institute of Electrical and Electronics Engineers Inc.

In recent years gaze estimation methods have made substantial progress, driven by the numerous application areas including human-robot interaction, visual attention estimation and foveated rendering for virtual reality headsets. However, many gaze estimation methods typically assume that the subject's eyes are open; for closed eyes, these methods provide irregular gaze estimates. Here, we address this assumption by first introducing a new open-sourced dataset with annotations of the eye-openness of more than 200,000 eye images, including more than 10,000 images where the eyes are closed. We further present baseline methods that allow for blink detection using convolutional neural networks. In extensive experiments, we show that the proposed baselines perform favourably in terms of precision and recall. We further incorporate our proposed RT-BENE baselines in the recently presented RT-GENE gaze estimation framework where it provides a real-time inference of the openness of the eyes. We argue that our work will benefit both gaze estimation and blink estimation methods, and we take steps towards unifying these methods.

Conference paper
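
A minimal convolutional eye-openness classifier in the spirit of the baselines described above; the 36x60 grayscale patch size and the layer widths are illustrative assumptions, and the released RT-BENE baselines differ.

```python
# A small CNN producing a "eye closed" logit per eye patch (assumed sizes).
import torch
import torch.nn as nn

class BlinkNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 9 * 15, 64), nn.ReLU(),
            nn.Linear(64, 1),              # logit for "eye closed"
        )

    def forward(self, x):                  # x: (batch, 1, 36, 60) eye patches
        return self.head(self.features(x))
```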

Taniguchi T, Ugur E, Ogata T, Nagai T, Demiris Y et al., 2019, Editorial: Machine Learning Methods for High-Level Cognitive Capabilities in Robotics, Frontiers in Neurorobotics, Vol: 13, ISSN: 1662-5218

Journal article

Zhang F, Cully A, Demiris Y, 2019, Probabilistic real-time user posture tracking for personalized robot-assisted dressing, IEEE Transactions on Robotics, Vol: 35, Pages: 873-888, ISSN: 1552-3098

Robotic solutions to dressing assistance have the potential to provide tremendous support for elderly and disabled people. However, unexpected user movements may lead to dressing failures or even pose a risk to the user. Tracking such user movements with vision sensors is challenging due to severe visual occlusions created by the robot and clothes. In this paper, we propose a probabilistic tracking method using Bayesian networks in latent spaces, which fuses robot end-effector positions and force information to enable cameraless and real-time estimation of the user postures during dressing. The latent spaces are created before dressing by modeling the user movements with a Gaussian process latent variable model, taking the user’s movement limitations into account. We introduce a robot-assisted dressing system that combines our tracking method with hierarchical multitask control to minimize the force between the user and the robot. The experimental results demonstrate the robustness and accuracy of our tracking method. The proposed method enables the Baxter robot to provide personalized dressing assistance in putting on a sleeveless jacket for users with (simulated) upper-body impairments.

Journal article
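
The paper tracks posture with Bayesian networks over a GP-LVM latent space; as a rough, generic illustration of the same idea (filtering a low-dimensional posture state from end-effector and force measurements, without a camera), here is a simple particle filter. `decode_posture` and the noise scales are hypothetical stand-ins for the learned latent-to-measurement mapping, not the paper's model.

```python
# Schematic latent-space particle filter for camera-less posture tracking.
import numpy as np

def decode_posture(z):
    # Hypothetical stand-in for the GP-LVM mean mapping: latent z -> expected
    # measurement (here a toy linear map).
    W = np.array([[0.8, 0.1], [0.2, 0.9], [0.5, 0.5]])
    return W @ z

def step(particles, measurement, motion_std=0.05, obs_std=0.1):
    # Diffuse particles (slow user movement assumption), then reweight by how
    # well each latent posture explains the end-effector/force measurement.
    particles = particles + np.random.randn(*particles.shape) * motion_std
    err = np.linalg.norm(
        np.stack([decode_posture(z) for z in particles]) - measurement, axis=1)
    w = np.exp(-0.5 * (err / obs_std) ** 2)
    w /= w.sum()
    idx = np.random.choice(len(particles), len(particles), p=w)
    return particles[idx]              # resampled posterior over postures
```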

Bagga S, Maurer B, Miller T, Quinlan L, Silvestri L, Wells D, Winqvist R, Zolotas M, Demiris Y et al., 2019, instruMentor: An Interactive Robot for Musical Instrument Tutoring, Towards Autonomous Robotic Systems Conference, Publisher: Springer International Publishing, Pages: 303-315, ISSN: 0302-9743

Conference paper

Wang R, Ciliberto C, Amadori P, Demiris Y et al., 2019, Random Expert Distillation: Imitation Learning via Expert Policy Support Estimation, Thirty-sixth International Conference on Machine Learning, Publisher: Proceedings of International Conference on Machine Learning (ICML-2019)

We consider the problem of imitation learning from a finite set of expert trajectories, without access to reinforcement signals. The classical approach of extracting the expert's reward function via inverse reinforcement learning, followed by reinforcement learning, is indirect and may be computationally expensive. Recent generative adversarial methods based on matching the policy distribution between the expert and the agent could be unstable during training. We propose a new framework for imitation learning by estimating the support of the expert policy to compute a fixed reward function, which allows us to re-frame imitation learning within the standard reinforcement learning setting. We demonstrate the efficacy of our reward function on both discrete and continuous domains, achieving comparable or better performance than the state of the art under different reinforcement learning algorithms.

Conference paper
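
A minimal sketch of the support-estimation idea above, written in the style of random network distillation: a predictor is trained to match a fixed, randomly initialised target network on expert state-action pairs, and the negative prediction error then serves as a fixed reward that is high only on the expert policy's support. Network sizes and the training schedule are assumptions.

```python
# Support-estimation reward via (random-network) distillation on expert data.
import torch
import torch.nn as nn

def make_net(in_dim, out_dim=32):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, out_dim))

def fit_reward(expert_sa, epochs=200):
    target = make_net(expert_sa.shape[1])
    for p in target.parameters():          # target stays fixed and random
        p.requires_grad_(False)
    predictor = make_net(expert_sa.shape[1])
    opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
    for _ in range(epochs):
        loss = ((predictor(expert_sa) - target(expert_sa)) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

    def reward(sa):                        # low error = on expert support
        with torch.no_grad():
            return -((predictor(sa) - target(sa)) ** 2).mean(dim=-1)
    return reward
```

The returned reward function is then frozen and handed to any standard reinforcement learning algorithm, which is what lets the method re-frame imitation as ordinary RL.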

Cully A, Demiris Y, 2019, Online knowledge level tracking with data-driven student models and collaborative filtering, IEEE Transactions on Knowledge and Data Engineering, Vol: 32, Pages: 2000-2013, ISSN: 1041-4347

Intelligent Tutoring Systems are promising tools for delivering optimal and personalised learning experiences to students. A key component for their personalisation is the student model, which infers the knowledge level of the students to balance the difficulty of the exercises. While important advances have been achieved, several challenges remain. In particular, the models should be able to track in real-time the evolution of the students' knowledge levels. These evolutions are likely to follow different profiles for each student, while measuring the exact knowledge level remains difficult given the limited and noisy information provided by the interactions. This paper introduces a novel model that addresses these challenges with three contributions: 1) the model relies on Gaussian Processes to track online the evolution of the student's knowledge level over time, 2) it uses collaborative filtering to rapidly provide long-term predictions by leveraging the information from previous users, and 3) it automatically generates abstract representations of knowledge components via automatic relevance determination of covariance matrices. The model is evaluated on three datasets, including real users. The results demonstrate that the model converges to accurate predictions on average 4 times faster than the compared methods.

Journal article
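
The first contribution above (online tracking of a knowledge level with Gaussian processes) can be illustrated in a few lines with scikit-learn. The kernel choice, noise level, and toy data are assumptions; the paper's model additionally leverages collaborative filtering across students and automatic relevance determination.

```python
# GP regression of a knowledge level over interaction time (toy example).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.array([[0.], [1.], [2.], [4.], [5.]])     # interaction times
score = np.array([0.2, 0.35, 0.5, 0.7, 0.75])    # noisy exercise outcomes

gp = GaussianProcessRegressor(RBF(length_scale=2.0) + WhiteKernel(0.01))
gp.fit(t, score)
mean, std = gp.predict(np.array([[6.]]), return_std=True)  # forecast + uncertainty
```

The predictive standard deviation is what lets a tutoring system act conservatively when the knowledge estimate is still uncertain.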

Celiktutan O, Demiris Y, 2019, Inferring human knowledgeability from eye gaze in mobile learning environments, 15th European Conference on Computer Vision (ECCV), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 193-209, ISSN: 0302-9743

What people look at during a visual task reflects an interplay between ocular motor functions and cognitive processes. In this paper, we study the links between eye gaze and cognitive states to investigate whether eye gaze reveals information about an individual's knowledgeability. We focus on a mobile learning scenario where a user and a virtual agent play a quiz game using a hand-held mobile device. To the best of our knowledge, this is the first attempt to predict a user's knowledgeability from eye gaze using a noninvasive eye tracking method on mobile devices: we perform gaze estimation using the front-facing camera of mobile devices, in contrast to using specialised eye tracking devices. First, we define a set of eye movement features that are discriminative for inferring user's knowledgeability. Next, we train a model to predict users' knowledgeability in the course of responding to a question. We obtain a classification performance of 59.1%, achieving human-level performance, using eye movement features only, which has implications for (1) adapting behaviours of the virtual agent to the user's needs (e.g., the virtual agent can give hints); (2) personalising quiz questions to the user's perceived knowledgeability.

Conference paper

Kristan M, Leonardis A, Matas J, Felsberg M, Pflugfelder R, Zajc LČ, Vojír T, Bhat G, Lukežič A, Eldesokey A, Fernández G, García-Martín Á, Iglesias-Arias Á, Alatan AA, González-García A, Petrosino A, Memarmoghadam A, Vedaldi A, Muhič A, He A, Smeulders A, Perera AG, Li B, Chen B, Kim C, Xu C, Xiong C, Tian C, Luo C, Sun C, Hao C, Kim D, Mishra D, Chen D, Wang D, Wee D, Gavves E, Gundogdu E, Velasco-Salido E, Khan FS, Yang F, Zhao F, Li F, Battistone F, De Ath G, Subrahmanyam GRKS, Bastos G, Ling H, Galoogahi HK, Lee H, Li H, Zhao H, Fan H, Zhang H, Possegger H, Li H, Lu H, Zhi H, Li H, Lee H, Chang HJ, Drummond I, Valmadre J, Martin JS, Chahl J, Choi JY, Li J, Wang J, Qi J, Sung J, Johnander J, Henriques J, Choi J, van de Weijer J, Herranz JR, Martínez JM, Kittler J, Zhuang J, Gao J, Grm K, Zhang L, Wang L, Yang L, Rout L, Si L, Bertinetto L, Chu L, Che M, Maresca ME, Danelljan M, Yang MH, Abdelpakey M, Shehata M, Kang M et al., 2019, The sixth visual object tracking VOT2018 challenge results, European Conference on Computer Vision, Publisher: Springer, Pages: 3-53, ISSN: 0302-9743

The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis and a “real-time” experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking subchallenge has been introduced to the set of standard VOT sub-challenges. The new subchallenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both standard short-term and the new long-term tracking subchallenges. Performance of the tested trackers typically by far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website (http://votchallenge.net).

Conference paper

Wang R, Amadori P, Demiris Y, 2019, Real-time workload classification during driving using hyperNetworks, International Conference on Intelligent Robots and Systems (IROS 2018), Publisher: IEEE, ISSN: 2153-0866

Classifying human cognitive states from behavioral and physiological signals is a challenging problem with important applications in robotics. The problem is challenging due to the data variability among individual users, and sensor artifacts. In this work, we propose an end-to-end framework for real-time cognitive workload classification with mixture Hyper Long Short Term Memory Networks (m-HyperLSTM), a novel variant of HyperNetworks. Evaluating the proposed approach on an eye-gaze pattern dataset collected from simulated driving scenarios of different cognitive demands, we show that the proposed framework outperforms previous baseline methods and achieves 83.9% precision and 87.8% recall during test. We also demonstrate the merit of our proposed architecture by showing improved performance over other LSTM-based methods.

Conference paper
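
A much-simplified HyperLSTM-style cell, sketching the mechanism named above: a small auxiliary LSTM emits an embedding that rescales the main cell's gate pre-activations at every time step. The paper's m-HyperLSTM mixes several such hypernetworks; that mixture is not reproduced here, and all sizes below are assumptions.

```python
# A toy HyperLSTM-style cell: an auxiliary LSTM modulates the main gates.
import torch
import torch.nn as nn

class TinyHyperLSTM(nn.Module):
    def __init__(self, in_dim, hidden=32, hyper_hidden=8):
        super().__init__()
        self.hyper = nn.LSTMCell(in_dim, hyper_hidden)
        self.scale = nn.Linear(hyper_hidden, 4 * hidden)  # one gain per gate unit
        self.main = nn.LSTMCell(in_dim, hidden)
        self.hidden, self.hyper_hidden = hidden, hyper_hidden

    def forward(self, x_seq):                  # x_seq: (time, batch, in_dim)
        b = x_seq.shape[1]
        hh = ch = x_seq.new_zeros(b, self.hyper_hidden)
        h = c = x_seq.new_zeros(b, self.hidden)
        for x in x_seq:
            hh, ch = self.hyper(x, (hh, ch))
            gain = torch.sigmoid(self.scale(hh))        # per-gate modulation
            pre = (x @ self.main.weight_ih.T + self.main.bias_ih
                   + h @ self.main.weight_hh.T + self.main.bias_hh) * gain
            i, f, g, o = pre.chunk(4, dim=1)            # PyTorch gate order
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
        return h                               # final state -> workload classifier
```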

Zolotas M, Elsdon J, Demiris Y, 2019, Head-mounted augmented reality for explainable robotic wheelchair assistance, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, ISSN: 2153-0866

Robotic wheelchairs with built-in assistive features, such as shared control, are an emerging means of providing independent mobility to severely disabled individuals. However, patients often struggle to build a mental model of their wheelchair's behaviour under different environmental conditions. Motivated by the desire to help users bridge this gap in perception, we propose a novel augmented reality system using a Microsoft HoloLens as a head-mounted aid for wheelchair navigation. The system displays visual feedback to the wearer as a way of explaining the underlying dynamics of the wheelchair's shared controller and its predicted future states. To investigate the influence of different interface design options, a pilot study was also conducted. We evaluated the acceptance rate and learning curve of an immersive wheelchair training regime, revealing preliminary insights into the potential beneficial and adverse nature of different augmented reality cues for assistive navigation. In particular, we demonstrate that care should be taken in the presentation of information, with effort-reducing cues for augmented information acquisition (for example, a rear-view display) being the most appreciated.

Conference paper

Di Veroli C, Le CA, Lemaire T, Makabu E, Nur A, Ooi V, Park JY, Sanna F, Chacon R, Demiris Y et al., 2019, LibRob: An autonomous assistive librarian, Pages: 15-26, ISBN: 9783030253318

This study explores how new robotic systems can help library users efficiently locate the book they require. A survey conducted among Imperial College students has shown an absence of a time-efficient and organised method to find the books they are looking for in the college library. The solution implemented, LibRob, is an automated assistive robot that guides users to the book they are searching for in an interactive manner, delivering a more satisfactory experience. LibRob is able to process a search request either by speech or by text and return a list of relevant books by author, subject or title. Once the user selects the book of interest, LibRob guides them to the shelf containing the book, then returns to its base station on completion. Experimental results demonstrate that the robot reduced the time necessary to find a book by 47.4% and left 80% of the users satisfied with their experience, proving that human-robot interactions can greatly improve the efficiency of basic activities within a library environment.

Book chapter

Fischer T, 2019, Perspective Taking in Robots: A Framework and Computational Model

Humans are inherently social beings that benefit from their perceptional capability to embody another point of view. This thesis examines this capability, termed perspective taking, using a mixed forward/reverse engineering approach. While previous approaches were limited to known, artificial environments, the proposed approach results in a perceptional framework that can be used in unconstrained environments while at the same time detailing the mechanisms that humans use to infer the world's characteristics from another viewpoint. First, the thesis explores a forward engineering approach by outlining the required perceptional components and implementing these components on a humanoid iCub robot. Prior to and during the perspective taking, the iCub learns the environment and recognizes its constituent objects before approximating the gaze of surrounding humans based on their head poses. Inspired by psychological studies, two separate mechanisms for the two types of perspective taking are employed, one based on line-of-sight tracing and another based on the mental rotation of the environment. Acknowledging that human head pose is only a rough indication of a human's viewpoint, the thesis introduces a novel, automated approach for ground truth eye gaze annotation. This approach is used to collect a new dataset, which covers a wide range of camera-subject distances, head poses, and gazes. A novel gaze estimation method trained on this dataset outperforms previous methods in close distance scenarios, while going beyond previous methods and also allowing eye gaze estimation in large camera-subject distances that are commonly encountered in human-robot interactions. Finally, the thesis proposes a computational model as an instantiation of a reverse engineering approach, with the aim of understanding the underlying mechanisms of perspective taking in humans. The model contains a set of forward models as building blocks, and an attentional component to reduce the model's respo

Thesis dissertation

Choi J, Chang HJ, Fischer T, Yun S, Lee K, Jeong J, Demiris Y, Choi JY et al., 2018, Context-aware deep feature compression for high-speed visual tracking, IEEE Conference on Computer Vision and Pattern Recognition, Publisher: Institute of Electrical and Electronics Engineers, Pages: 479-488, ISSN: 1063-6919

We propose a new context-aware correlation filter based tracking framework to achieve both high computational speed and state-of-the-art performance among real-time trackers. The major contribution to the high computational speed lies in the proposed deep feature compression that is achieved by a context-aware scheme utilizing multiple expert auto-encoders; a context in our framework refers to the coarse category of the tracking target according to appearance patterns. In the pre-training phase, one expert auto-encoder is trained per category. In the tracking phase, the best expert auto-encoder is selected for a given target, and only this auto-encoder is used. To achieve high tracking performance with the compressed feature map, we introduce extrinsic denoising processes and a new orthogonality loss term for pre-training and fine-tuning of the expert auto-encoders. We validate the proposed context-aware framework through a number of experiments, where our method achieves a comparable performance to state-of-the-art trackers which cannot run in real-time, while running at a significantly fast speed of over 100 fps.

Conference paper
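
Two ingredients named above can be sketched briefly: selecting the best expert auto-encoder for a given target by reconstruction error, and an orthogonality penalty on encoder weights. The common form ||W W^T - I||_F^2 below is shown only as an illustration; the paper defines its own orthogonality loss term, which may differ.

```python
# Expert auto-encoder selection and a generic orthogonality penalty (sketch).
import torch

def pick_expert(experts, feat):
    # experts: list of (encoder, decoder) pairs, one per appearance context.
    # Select the expert that reconstructs the target feature map best.
    errs = [((dec(enc(feat)) - feat) ** 2).mean() for enc, dec in experts]
    return int(torch.stack(errs).argmin())

def orthogonality_penalty(W):
    # Encourage rows of W to be orthonormal: || W W^T - I ||_F^2.
    return ((W @ W.T - torch.eye(W.shape[0])) ** 2).sum()
```

Only the selected expert runs during tracking, which is where the framework's speed advantage comes from.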

Moulin-Frier C, Fischer T, Petit M, Pointeau G, Puigbo JY, Pattacini U, Low SC, Camilleri D, Nguyen P, Hoffmann M, Chang HJ, Zambelli M, Mealier AL, Damianou A, Metta G, Prescott TJ, Demiris Y, Dominey PF, Verschure PFMJ et al., 2018, DAC-h3: A Proactive Robot Cognitive Architecture to Acquire and Express Knowledge About the World and the Self, IEEE Transactions on Cognitive and Developmental Systems, Vol: 10, Pages: 1005-1022, ISSN: 2379-8920

This paper introduces a cognitive architecture for a humanoid robot to engage in a proactive, mixed-initiative exploration and manipulation of its environment, where the initiative can originate from both the human and the robot. The framework, based on a biologically-grounded theory of the brain and mind, integrates a reactive interaction engine, a number of state-of-the-art perceptual and motor learning algorithms, as well as planning abilities and an autobiographical memory. The architecture as a whole drives the robot behavior to solve the symbol grounding problem, acquire language capabilities, execute goal-oriented behavior, and express a verbal narrative of its own experience in the world. We validate our approach in human-robot interaction experiments with the iCub humanoid robot, showing that the proposed cognitive architecture can be applied in real time within a realistic scenario and that it can be used with naive users.

Journal article

Chang HJ, Fischer T, Petit M, Zambelli M, Demiris Y et al., 2018, Learning kinematic structure correspondences using multi-order similarities, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol: 40, Pages: 2920-2934, ISSN: 0162-8828

We present a novel framework for finding the kinematic structure correspondences between two articulated objects in videos via hypergraph matching. In contrast to appearance and graph alignment based matching methods, which have been applied among two similar static images, the proposed method finds correspondences between two dynamic kinematic structures of heterogeneous objects in videos. Thus our method allows matching the structure of objects which have similar topologies or motions, or a combination of the two. Our main contributions are summarised as follows: (i) casting the kinematic structure correspondence problem into a hypergraph matching problem by incorporating multi-order similarities with normalising weights, (ii) introducing a structural topology similarity measure by aggregating topology constrained subgraph isomorphisms, (iii) measuring kinematic correlations between pairwise nodes, and (iv) proposing a combinatorial local motion similarity measure using geodesic distance on the Riemannian manifold. We demonstrate the robustness and accuracy of our method through a number of experiments on synthetic and real data, showing that various other recent and state of the art methods are outperformed. Our method is not limited to a specific application nor sensor, and can be used as building block in applications such as action recognition, human motion retargeting to robots, and articulated object manipulation.

Journal article
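
The general technique named above, hypergraph matching, is commonly solved by tensor power iteration over a third-order similarity tensor. The sketch below shows that generic solver only; the paper's multi-order similarities (topology, kinematic correlation, local motion) and normalising weights are not reproduced, and the greedy discretisation at the end is a simplification.

```python
# Generic third-order hypergraph matching by tensor power iteration.
import numpy as np

def hypergraph_match(S, n1, n2, iters=50):
    # S: (n1*n2, n1*n2, n1*n2) tensor of triplet-correspondence similarities.
    x = np.ones(n1 * n2) / (n1 * n2)          # soft assignment vector
    for _ in range(iters):
        x = np.einsum('ijk,j,k->i', S, x, x)  # tensor power iteration step
        x /= np.linalg.norm(x)
    # Greedy discretisation: each node in graph 1 takes its best match.
    return x.reshape(n1, n2).argmax(axis=1)
```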

Sarabia M, Young N, Canavan K, Edginton T, Demiris Y, Vizcaychipi MP et al., 2018, Assistive robotic technology to combat social isolation in acute hospital settings, International Journal of Social Robotics, Vol: 10, Pages: 607-620, ISSN: 1875-4791

Social isolation in hospitals is a well established risk factor for complications such as cognitive decline and depression. Assistive robotic technology has the potential to combat this problem, but first it is critical to investigate how hospital patients react to this technology. In order to address this question, we introduced a remotely operated NAO humanoid robot which conversed, made jokes, played music, danced and exercised with patients in a London hospital. In total, 49 patients aged between 18 and 100 took part in the study, 7 of whom had dementia. Our results show that a majority of patients enjoyed their interaction with NAO. We also found that age and dementia significantly affect the interaction, whereas gender does not. These results indicate that hospital patients enjoy socialising with robots, opening new avenues for future research into the potential health benefits of a social robotic companion.

Journal article

Fischer T, Chang HJ, Demiris Y, 2018, RT-GENE: Real-time eye gaze estimation in natural environments, European Conference on Computer Vision, Publisher: Springer Verlag, Pages: 339-357, ISSN: 0302-9743

In this work, we consider the problem of robust gaze estimation in natural environments. Large camera-to-subject distances and high variations in head pose and eye gaze angles are common in such environments. This leads to two main shortfalls in state-of-the-art methods for gaze estimation: hindered ground truth gaze annotation and diminished gaze estimation accuracy as image resolution decreases with distance. We first record a novel dataset of varied gaze and head pose images in a natural environment, addressing the issue of ground truth annotation by measuring head pose using a motion capture system and eye gaze using mobile eye-tracking glasses. We apply semantic image inpainting to the area covered by the glasses to bridge the gap between training and testing images by removing the obtrusiveness of the glasses. We also present a new real-time algorithm involving appearance-based deep convolutional neural networks with increased capacity to cope with the diverse images in the new dataset. Experiments with this network architecture are conducted on a number of diverse eye-gaze datasets including our own, and in cross-dataset evaluations. We demonstrate state-of-the-art performance in terms of estimation accuracy in all experiments, and the architecture performs well even on lower resolution images.

Conference paper

Nguyen P, Fischer T, Chang HJ, Pattacini U, Metta G, Demiris Y et al., 2018, Transferring visuomotor learning from simulation to the real world for robotics manipulation tasks, IEEE/RSJ International Conference on Intelligent Robots and Systems, Publisher: IEEE, Pages: 6667-6674, ISSN: 2153-0866

Hand-eye coordination is a requirement for many manipulation tasks including grasping and reaching. However, accurate hand-eye coordination has been shown to be especially difficult to achieve in complex robots like the iCub humanoid. In this work, we solve the hand-eye coordination task using a visuomotor deep neural network predictor that estimates the arm's joint configuration given a stereo image pair of the arm and the underlying head configuration. As there are various unavoidable sources of sensing error on the physical robot, we train the predictor on images obtained from simulation. The images from simulation were modified to look realistic using an image-to-image translation approach. In various experiments, we first show that the visuomotor predictor provides accurate joint estimates of the iCub's hand in simulation. We then show that the predictor can be used to obtain the systematic error of the robot's joint measurements on the physical iCub robot. We demonstrate that a calibrator can be designed to automatically compensate this error. Finally, we validate that this enables accurate reaching of objects while circumventing manual fine-calibration of the robot.

Conference paper
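
The calibration step described above can be sketched simply: compare the visuomotor predictor's joint estimates against the robot's encoder readings over a batch of poses, and compensate the mean systematic offset. `predict_joints` is a hypothetical stand-in for the trained stereo-image predictor, and the constant per-joint bias model is an assumption.

```python
# Estimating and compensating a systematic joint-measurement offset (sketch).
import numpy as np

def estimate_offset(stereo_pairs, encoder_readings, predict_joints):
    # Vision-based joint estimates vs. (biased) encoder readings.
    preds = np.stack([predict_joints(left, right) for left, right in stereo_pairs])
    return (np.stack(encoder_readings) - preds).mean(axis=0)  # per-joint bias

def calibrated(encoder_reading, offset):
    return encoder_reading - offset    # compensated joint configuration
```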

Chacon Quesada R, Demiris Y, 2018, Augmented reality control of smart wheelchair using eye-gaze–enabled selection of affordances, IROS 2018 Workshop on Robots for Assisted Living, https://www.idiap.ch/workshop/iros2018/files/

In this paper we present a novel augmented reality head-mounted display user interface for controlling a robotic wheelchair for people with limited mobility. To lower the cognitive requirements needed to control the wheelchair, we propose integration of a smart wheelchair with an eye-tracking enabled head-mounted display. We propose a novel platform that integrates multiple user interface interaction methods for aiming at and selecting affordances derived by on-board perception capabilities such as laser-scanner readings and cameras. We demonstrate the effectiveness of the approach by evaluating our platform in two realistic scenarios: 1) Door detection, where the affordance corresponds to a Door object and the Go-Through action and 2) People detection, where the affordance corresponds to a Person and the Approach action. To the best of our knowledge, this is the first demonstration of an augmented reality head-mounted display user interface for controlling a smart wheelchair.

Conference paper

Goncalves Nunes U, Demiris Y, 2018, 3D motion segmentation of articulated rigid bodies based on RGB-D data, British Machine Vision Conference (BMVC 2018), Publisher: British Machine Vision Association (BMVA)

This paper addresses the problem of motion segmentation of articulated rigid bodies from a single-view RGB-D data sequence. Current methods either perform dense motion segmentation, and consequently are very computationally demanding, or rely on sparse 2D feature points, which may not be sufficient to represent the entire scene. In this paper, we advocate the use of 3D semi-dense motion segmentation, which also bridges some limitations of standard 2D methods (e.g. background removal). We cast the 3D motion segmentation problem into a subspace clustering problem, adding an adaptive spectral clustering that estimates the number of object rigid parts. The resultant method has few parameters to adjust, takes less time than the temporal length of the scene and requires no post-processing.

Conference paper
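
A compact sketch of the adaptive spectral clustering step described above: build an affinity between 3-D point trajectories, estimate the number of rigid parts with the eigengap heuristic, and cluster. The RBF affinity on raw trajectories is a simplification standing in for the paper's subspace-clustering affinity.

```python
# Rigid-part segmentation with eigengap-based model selection (sketch).
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel

def segment(trajectories, max_parts=6):
    # trajectories: (num_points, 3 * num_frames) flattened 3-D tracks
    A = rbf_kernel(trajectories)
    d = A.sum(axis=1)
    L = np.eye(len(A)) - A / np.sqrt(np.outer(d, d))   # normalised Laplacian
    eigvals = np.sort(np.linalg.eigvalsh(L))[:max_parts + 1]
    k = int(np.argmax(np.diff(eigvals))) + 1           # eigengap heuristic
    return SpectralClustering(n_clusters=max(k, 2),
                              affinity='precomputed').fit_predict(A)
```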

Cully AHR, Demiris Y, 2018, Hierarchical behavioral repertoires with unsupervised descriptors, Genetic and Evolutionary Computation Conference 2018, Publisher: ACM

Enabling artificial agents to automatically learn complex, versatile and high-performing behaviors is a long-standing challenge. This paper presents a step in this direction with hierarchical behavioral repertoires that stack several behavioral repertoires to generate sophisticated behaviors. Each repertoire of this architecture uses the lower repertoires to create complex behaviors as sequences of simpler ones, while only the lowest repertoire directly controls the agent's movements. This paper also introduces a novel approach to automatically define behavioral descriptors thanks to an unsupervised neural network that organizes the produced high-level behaviors. The experiments show that the proposed architecture enables a robot to learn how to draw digits in an unsupervised manner after having learned to draw lines and arcs. Compared to traditional behavioral repertoires, the proposed architecture reduces the dimensionality of the optimization problems by orders of magnitude and provides behaviors with twice the fitness. More importantly, it enables the transfer of knowledge between robots: a hierarchical repertoire evolved for a robotic arm to draw digits can be transferred to a humanoid robot by simply changing the lowest layer of the hierarchy. This enables the humanoid to draw digits although it has never been trained for this task.

Conference paper
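
A toy, two-layer rendering of the hierarchy described above: the lowest layer stores motor primitives indexed by behaviour descriptors, and the upper layer composes behaviours as sequences of lower-layer keys. All names and parameters below are hypothetical; in the paper the layers are full behavioural repertoires produced by quality-diversity optimisation, not hand-written dictionaries.

```python
# A toy two-layer behavioural repertoire (illustrative structure only).
primitives = {                      # lowest layer: descriptor -> motor params
    "line_up": {"dx": 0.0, "dy": 1.0},
    "line_right": {"dx": 1.0, "dy": 0.0},
    "arc_cw": {"dx": 0.5, "dy": 0.5, "curvature": -1.0},
}

digits = {                          # upper layer: behaviour -> key sequence
    "1": ["line_up"],
    "7": ["line_right", "line_up"],
}

def execute(behaviour, send_command):
    # `send_command` is the robot-specific motor interface (hypothetical).
    for key in digits[behaviour]:
        send_command(primitives[key])
```

Swapping only the `primitives` layer retargets every composed behaviour to a new robot, which mirrors the arm-to-humanoid transfer result reported above.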

Kucukyilmaz A, Demiris Y, 2018, Learning shared control by demonstration for personalized wheelchair assistance, IEEE Transactions on Haptics, Vol: 11, Pages: 431-442, ISSN: 1939-1412

An emerging research problem in assistive robotics is the design of methodologies that allow robots to provide personalized assistance to users. For this purpose, we present a method to learn shared control policies from demonstrations offered by a human assistant. We train a Gaussian process (GP) regression model to continuously regulate the level of assistance between the user and the robot, given the user's previous and current actions and the state of the environment. The assistance policy is learned after only a single human demonstration, i.e., in one-shot. Our technique is evaluated in a one-of-a-kind experimental study, where the machine-learned shared control policy is compared to human assistance. Our analyses show that our technique is successful in emulating human shared control, by matching the location and amount of offered assistance on different trajectories. We observed that the effort requirements of the users were comparable between human-robot and human-human settings. Under the learned policy, the jerkiness of the user's joystick movements dropped significantly, despite a significant increase in the jerkiness of the robot assistant's commands. In terms of performance, even though the robotic assistance increased task completion time, the average distance to obstacles stayed in similar ranges to human assistance.

Journal article
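
The arbitration mechanism described above reduces to a regression problem, which can be sketched with scikit-learn: a GP maps the user's action and the environment state to an assistance level, trained from a single assisted demonstration. The feature layout, kernel, and toy data below are assumptions.

```python
# GP-regressed assistance level blending user and robot commands (sketch).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# One demonstration: rows of [user_vx, user_vy, dist_to_obstacle] (assumed).
X = np.array([[0.4, 0.0, 2.0], [0.5, 0.1, 0.8], [0.2, 0.3, 0.3]])
alpha = np.array([0.1, 0.5, 0.9])   # assistance level applied by the helper

gp = GaussianProcessRegressor(RBF(length_scale=1.0)).fit(X, alpha)

def blend(user_cmd, robot_cmd, state):
    # Regress the assistance level for the current state, clip to [0, 1],
    # and mix the two command vectors accordingly.
    a = float(np.clip(gp.predict(state.reshape(1, -1))[0], 0.0, 1.0))
    return (1 - a) * user_cmd + a * robot_cmd
```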

Fischer T, Demiris Y, 2018, A computational model for embodied visual perspective taking: from physical movements to mental simulation, Vision Meets Cognition Workshop at CVPR 2018

To understand people and their intentions, humans have developed the ability to imagine their surroundings from another visual point of view. This cognitive ability is called perspective taking and has been shown to be essential in child development and social interactions. However, the precise cognitive mechanisms underlying perspective taking remain to be fully understood. Here we present a computational model that implements perspective taking as a mental simulation of the physical movements required to step into the other point of view. The visual percept after each mental simulation step is estimated using a set of forward models. Based on our experimental results, we propose that a visual attention mechanism explains the response times reported in human visual perspective taking experiments. The model is also able to generate several testable predictions to be explored in further neurophysiological studies.

Conference paper
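
A schematic of the mental-simulation loop described above: starting from the robot's own pose, repeatedly plan one movement step, predict the resulting pose with a forward model, and estimate the visual percept at each step. `plan_step`, `forward_model` and `render_estimate` are hypothetical stand-ins for the model's learned components.

```python
# Mental simulation of the movements needed to reach another viewpoint.
# Each simulated step yields an estimated percept; the number of steps
# relates to the response times discussed in the abstract above.
def simulate_perspective(pose, target_pose, plan_step, forward_model,
                         render_estimate, max_steps=20):
    percepts = []
    for _ in range(max_steps):
        if pose == target_pose:                 # target viewpoint reached
            break
        action = plan_step(pose, target_pose)   # e.g. one rotation increment
        pose = forward_model(pose, action)      # predicted next pose
        percepts.append(render_estimate(pose))  # estimated visual percept
    return percepts
```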

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
