Imperial College London

Professor Yiannis Demiris

Faculty of Engineering, Department of Electrical and Electronic Engineering

Professor of Human-Centred Robotics, Head of ISN
 
 
 

Contact

 

+44 (0)20 7594 6300
y.demiris
Website

 
 

Location

 

1011, Electrical Engineering, South Kensington Campus


Publications


272 results found

Kucukyilmaz A, Demiris Y, 2018, Learning shared control by demonstration for personalized wheelchair assistance, IEEE Transactions on Haptics, Vol: 11, Pages: 431-442, ISSN: 1939-1412

An emerging research problem in assistive robotics is the design of methodologies that allow robots to provide personalized assistance to users. For this purpose, we present a method to learn shared control policies from demonstrations offered by a human assistant. We train a Gaussian process (GP) regression model to continuously regulate the level of assistance between the user and the robot, given the user's previous and current actions and the state of the environment. The assistance policy is learned after only a single human demonstration, i.e., in one shot. Our technique is evaluated in an experimental study where the machine-learned shared control policy is compared to human assistance. Our analyses show that our technique successfully emulates human shared control, matching the location and amount of offered assistance on different trajectories. We observed that the effort requirements of the users were comparable between human-robot and human-human settings. Under the learned policy, the jerkiness of the user's joystick movements dropped significantly, despite a significant increase in the jerkiness of the robot assistant's commands. In terms of performance, even though the robotic assistance increased task completion time, the average distance to obstacles stayed in ranges similar to human assistance.
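
The regulation mechanism is easy to picture in code. Below is a minimal sketch, not the authors' implementation: the feature layout, the demonstration data, and the command-blending rule are all assumptions for illustration.

```python
# Hypothetical sketch: a Gaussian process maps the user's recent joystick
# actions and the environment state to an assistance level in [0, 1].
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# One demonstration: rows are [prev_action_x, prev_action_y, cur_action_x,
# cur_action_y, dist_to_obstacle] (assumed features); targets are the
# assistance levels the human assistant applied at each instant.
X_demo = np.random.rand(200, 5)
y_demo = np.clip(np.random.rand(200), 0.0, 1.0)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3))
gp.fit(X_demo, y_demo)

def blend_command(user_cmd, robot_cmd, features):
    """Blend user and robot commands with the learned assistance level."""
    alpha = float(np.clip(gp.predict(features.reshape(1, -1))[0], 0.0, 1.0))
    return (1.0 - alpha) * user_cmd + alpha * robot_cmd

cmd = blend_command(np.array([0.3, 0.1]), np.array([0.0, 0.2]),
                    np.random.rand(5))
```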

Journal article

Fischer T, Demiris Y, 2018, A computational model for embodied visual perspective taking: from physical movements to mental simulation, Vision Meets Cognition Workshop at CVPR 2018

To understand people and their intentions, humans have developed the ability to imagine their surroundings from another visual point of view. This cognitive ability is called perspective taking and has been shown to be essential in child development and social interactions. However, the precise cognitive mechanisms underlying perspective taking remain to be fully understood. Here we present a computational model that implements perspective taking as a mental simulation of the physical movements required to step into the other point of view. The visual percept after each mental simulation step is estimated using a set of forward models. Based on our experimental results, we propose that a visual attention mechanism explains the response times reported in human visual perspective taking experiments. The model is also able to generate several testable predictions to be explored in further neurophysiological studies.
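
The step-counting account of response times can be illustrated with a toy model. Everything concrete here (the angular step size and the linear step-to-latency mapping) is an assumption for illustration, not the paper's model.

```python
# Toy model: perspective taking as iterated mental simulation steps, where
# predicted response time grows with the number of rotation steps needed.
import math

STEP_DEG = 10.0  # assumed angular resolution of one mental simulation step

def mental_rotation_steps(own_heading_deg, other_heading_deg):
    """Number of simulation steps to rotate into the other's viewpoint."""
    diff = abs((other_heading_deg - own_heading_deg + 180) % 360 - 180)
    return int(math.ceil(diff / STEP_DEG))

def predicted_response_time(own_heading_deg, other_heading_deg,
                            base_ms=500.0, per_step_ms=40.0):
    # The linear step-to-latency mapping is an assumption for illustration.
    steps = mental_rotation_steps(own_heading_deg, other_heading_deg)
    return base_ms + per_step_ms * steps

print(predicted_response_time(0.0, 135.0))  # more rotation -> longer RT
```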

Conference paper

Elsdon J, Demiris Y, 2018, Augmented reality for feedback in a shared control spraying task, IEEE International Conference on Robotics and Automation (ICRA), Publisher: Institute of Electrical and Electronics Engineers (IEEE), Pages: 1939-1946, ISSN: 1050-4729

Using industrial robots to spray structures has been investigated extensively; however, interesting challenges emerge when using handheld spraying robots. In previous work we demonstrated the use of shared control of a handheld spraying robot to assist a user in a 3D spraying task. In this paper we demonstrate the use of augmented reality interfaces to increase the user's progress and task awareness. We describe our solutions to challenging calibration issues between the Microsoft HoloLens system and a motion capture system without the need for well-defined markers or careful alignment on the part of the user. Error relative to the motion capture system was shown to be 10 mm after only a 4-second calibration routine. Secondly, we outline a logical approach for visualising liquid density for an augmented reality spraying task; this system allows the user to clearly see target regions to complete, areas that are complete, and areas that have been overdosed. Finally, we conducted a user study to investigate the level of assistance that a handheld robot utilising shared control methods should provide during a spraying task. Using a handheld spraying robot with a moving spray head did not aid the user much over simply actuating the spray nozzle for them. Compared to manual control, the automatic modes significantly reduced the task load experienced by the user and significantly increased the quality of the result of the spraying task, reducing the error by 33-45%.

Conference paper

Cully AHR, Demiris Y, 2018, Quality and diversity optimization: a unifying modular framework, IEEE Transactions on Evolutionary Computation, Vol: 22, Pages: 245-259, ISSN: 1941-0026

The optimization of functions to find the best solution according to one or several objectives has a central role in many engineering and research fields. Recently, a new family of optimization algorithms, named Quality-Diversity optimization, has been introduced, and contrasts with classic algorithms. Instead of searching for a single solution, Quality-Diversity algorithms search for a large collection of both diverse and high-performing solutions. The role of this collection is to cover the range of possible solution types as much as possible, and to contain the best solution for each type. The contribution of this paper is threefold. Firstly, we present a unifying framework of Quality-Diversity optimization algorithms that covers the two main algorithms of this family (Multi-dimensional Archive of Phenotypic Elites and Novelty Search with Local Competition), and that highlights the large variety of variants that can be investigated within this family. Secondly, we propose a new selection mechanism for Quality-Diversity algorithms that outperforms all the algorithms tested in this paper. Lastly, we present a new collection management approach that overcomes the erosion issues observed when using unstructured collections. These three contributions are supported by extensive experimental comparisons of Quality-Diversity algorithms on three different experimental scenarios.
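
A MAP-Elites-style loop, one of the two algorithms the framework covers, captures the core idea in a few lines. The task, behaviour descriptor, and grid resolution below are toy assumptions.

```python
# Toy MAP-Elites-style loop illustrating Quality-Diversity: keep the best
# solution per behaviour "type" (grid cell) rather than a single optimum.
import random

GRID = 20                       # cells along a 1-D behaviour descriptor
archive = {}                    # cell index -> (fitness, solution)

def fitness(x):                 # toy objective
    return -(x - 0.3) ** 2

def descriptor(x):              # toy behaviour descriptor in [0, 1]
    return min(max(x, 0.0), 1.0)

for _ in range(10000):
    if archive and random.random() < 0.9:
        _, parent = random.choice(list(archive.values()))
        child = parent + random.gauss(0.0, 0.05)   # mutate an elite
    else:
        child = random.random()                    # random bootstrap
    cell = min(int(descriptor(child) * GRID), GRID - 1)
    f = fitness(child)
    if cell not in archive or f > archive[cell][0]:
        archive[cell] = (f, child)                 # keep the elite per cell

print(len(archive), "cells filled; best fitness:", max(archive.values())[0])
```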

Journal article

Fischer T, Puigbo J-Y, Camilleri D, Nguyen PDH, Moulin-Frier C, Lallee S, Metta G, Prescott TJ, Demiris Y, Verschure P et al., 2018, iCub-HRI: A software framework for complex human-robot interaction scenarios on the iCub humanoid robot, Frontiers in Robotics and AI, Vol: 5, Pages: 1-9, ISSN: 2296-9144

Generating complex, human-like behaviour in a humanoid robot like the iCub requires the integration of a wide range of open source components and a scalable cognitive architecture. Hence, we present the iCub-HRI library, which provides convenience wrappers for components related to perception (object recognition, agent tracking, speech recognition, touch detection), object manipulation (basic and complex motor actions) and social interaction (speech synthesis, joint attention), exposed as a C++ library with bindings for Java (allowing iCub-HRI to be used within Matlab) and Python. In addition to previously integrated components, the library allows for simple extension to new components and rapid prototyping by adapting to changes in interfaces between components. We also provide a set of modules which make use of the library, such as a high-level knowledge acquisition module and an action recognition module. The proposed architecture has been successfully employed for a complex human-robot interaction scenario involving the acquisition of language capabilities, execution of goal-oriented behaviour and expression of a verbal narrative of the robot's experience in the world. Accompanying this paper is a tutorial which allows a subset of this interaction to be reproduced. The architecture is aimed at researchers familiarising themselves with the iCub ecosystem, as well as expert users, and we expect the library to be widely used in the iCub community.

Journal article

Zhang F, Cully A, Demiris Y, 2017, Personalized Robot-assisted Dressing using User Modeling in Latent Spaces, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, ISSN: 2153-0866

Robots have the potential to provide tremendous support to disabled and elderly people in their everyday tasks, such as dressing. Many recent studies on robotic dressing assistance view dressing as a trajectory planning problem. However, the user's movements during the dressing process are rarely taken into account, which often leads to failure of the planned trajectory and may put the user at risk. The main difficulty of taking user movements into account is caused by severe occlusions created by the robot, the user, and the clothes during the dressing process, which prevent vision sensors from accurately detecting the postures of the user in real time. In this paper, we address this problem by introducing an approach that allows the robot to automatically adapt its motion according to the force applied on the robot's gripper caused by user movements. There are two main contributions introduced in this paper: 1) the use of a hierarchical multi-task control strategy to automatically adapt the robot motion and minimize the force applied between the user and the robot caused by user movements; 2) the online update of the dressing trajectory based on the user movement limitations modeled with a Gaussian Process Latent Variable Model in a latent space, and the density information extracted from that latent space. The combination of these two contributions leads to a personalized dressing assistance that can cope with unpredicted user movements during the dressing while constantly minimizing the force that the robot may apply on the user. The experimental results demonstrate that the proposed method allows the Baxter humanoid robot to provide personalized dressing assistance for human users with simulated upper-body impairments.
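
The first contribution, force-driven local adaptation, can be sketched compactly. The threshold, gain, and sensor stub below are invented for illustration; the paper's hierarchical multi-task controller and GP-LVM update are not reproduced here.

```python
# Sketch of force-driven local motion adaptation with made-up constants.
import numpy as np

FORCE_LIMIT = 5.0   # [N] assumed comfort threshold
GAIN = 0.01         # [m/N] assumed compliance gain

def read_gripper_force():
    """Placeholder for the robot's gripper force sensor."""
    return np.array([0.0, 6.5, 0.0])

def adapt_waypoint(waypoint):
    """Shift the next dressing waypoint to reduce the user-applied force."""
    force = read_gripper_force()
    magnitude = np.linalg.norm(force)
    if magnitude > FORCE_LIMIT:
        # Move along the force direction so the garment yields to the user.
        waypoint = waypoint + GAIN * (magnitude - FORCE_LIMIT) * force / magnitude
    return waypoint

print(adapt_waypoint(np.array([0.4, 0.2, 0.9])))
```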

Conference paper

Choi J, Chang HJ, Yun S, Fischer T, Demiris Y, Choi JY et al., 2017, Attentional correlation filter network for adaptive visual tracking, IEEE Conference on Computer Vision and Pattern Recognition, Publisher: IEEE, ISSN: 1063-6919

We propose a new tracking framework with an attentional mechanism that chooses a subset of the associated correlation filters for increased robustness and computational efficiency. The subset of filters is adaptively selected by a deep attentional network according to the dynamic properties of the tracking target. Our contributions are manifold, and are summarised as follows: (i) Introducing the Attentional Correlation Filter Network which allows adaptive tracking of dynamic targets. (ii) Utilising an attentional network which shifts the attention to the best candidate modules, as well as predicting the estimated accuracy of currently inactive modules. (iii) Enlarging the variety of correlation filters which cover target drift, blurriness, occlusion, scale changes, and flexible aspect ratio. (iv) Validating the robustness and efficiency of the attentional mechanism for visual tracking through a number of experiments. Our method achieves similar performance to non-real-time trackers, and state-of-the-art performance amongst real-time trackers.
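
A rough sketch of the selection mechanism follows; the attention scores, module internals, and decay rule are placeholders rather than the paper's network.

```python
# Illustration of attention-gated module selection: score every correlation
# filter module, run only the top-k, and keep decayed accuracy estimates
# for the inactive ones. All numbers here are stand-ins.
import numpy as np

N_MODULES, TOP_K, DECAY = 12, 4, 0.95
est_accuracy = np.full(N_MODULES, 0.5)   # running accuracy estimates

def attention_scores(target_state):
    """Placeholder for the deep attentional network's output."""
    rng = np.random.default_rng(hash(target_state) % (2**32))
    return rng.random(N_MODULES)

def track_frame(target_state, module_responses):
    active = np.argsort(attention_scores(target_state))[-TOP_K:]
    for m in range(N_MODULES):
        if m in active:
            est_accuracy[m] = module_responses[m]       # measured this frame
        else:
            est_accuracy[m] *= DECAY                    # predicted, decays
    return active

print(track_frame("frame_0", np.random.rand(N_MODULES)))
```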

Conference paper

Yoo YJ, Chang H, Yun S, Demiris Y, Choi JY et al., 2017, Variational autoencoded regression: high dimensional regression of visual data on complex manifold, IEEE Conference on Computer Vision and Pattern Recognition, Publisher: IEEE, Pages: 2943-2952

This paper proposes a new high dimensional regression method by merging Gaussian process regression into a variational autoencoder framework. In contrast to other regression methods, the proposed method focuses on the case where output responses are on a complex high dimensional manifold, such as images. Our contributions are summarized as follows: (i) A new regression method estimating high dimensional image responses, which is not handled by existing regression algorithms, is proposed. (ii) The proposed regression method introduces a strategy to learn the latent space as well as the encoder and decoder, so that the result of the regressed response in the latent space coincides with the corresponding response in the data space. (iii) The proposed regression is embedded into a generative model, and the whole procedure is developed within the variational autoencoder framework. We demonstrate the robustness and effectiveness of our method through a number of experiments on various visual data regression problems.

Conference paper

Chang HJ, Demiris Y, 2017, Highly articulated kinematic structure estimation combining motion and skeleton information, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol: 40, Pages: 2165-2179, ISSN: 0162-8828

In this paper, we present a novel framework for unsupervised kinematic structure learning of complex articulated objects from a single-view 2D image sequence. In contrast to prior motion-based methods, which estimate relatively simple articulations, our method can generate arbitrarily complex kinematic structures with skeletal topology via a successive iterative merging strategy. The iterative merge process is guided by a density weighted skeleton map which is generated from a novel object boundary generation method from sparse 2D feature points. Our main contributions can be summarised as follows: (i) An unsupervised complex articulated kinematic structure estimation method that combines motion segments with skeleton information. (ii) An iterative fine-to-coarse merging strategy for adaptive motion segmentation and structural topology embedding. (iii) A skeleton estimation method based on a novel silhouette boundary generation from sparse feature points using an adaptive model selection method. (iv) A new highly articulated object dataset with ground truth annotation. We have verified the effectiveness of our proposed method in terms of computational time and estimation accuracy through rigorous experiments. Our experiments show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.

Journal article

Elsdon J, Demiris Y, 2017, Assisted painting of 3D structures using shared control with a hand-held robot, IEEE International Conference on Robotics and Automation, Publisher: IEEE, Pages: 4891-4897

We present a shared control method for painting 3D geometries, using a handheld robot which has a single autonomously controlled degree of freedom. The user scans the robot near to the desired painting location, and the single movement axis moves the spray head to achieve the required paint distribution. A simultaneous simulation of the spraying procedure is performed, giving an open-loop approximation of the current state of the painting. An online prediction of the best path for the spray nozzle actuation is calculated in a receding-horizon fashion, by producing a map of the paint required in the 2D space defined by the nozzle position on the gantry and the time into the future. A directed graph then takes its edge weights from this paint density map, and Dijkstra's algorithm is used to find the candidate for the most effective path. Due to the heavy parallelisation of this approach, with the majority of the calculations taking place on a GPU, we can run the prediction loop in 32.6 ms for a prediction horizon of 1 second; this approach is computationally efficient, outperforming a greedy algorithm. The path chosen by the proposed method is on average in the top 15% of all paths as calculated by exhaustive testing. This approach enables the development of real-time path planning for assisted spray painting onto complicated 3D geometries, and could be applied to applications such as assistive painting for people with disabilities, or accurate placement of liquid when large-scale positioning of the head is too expensive.
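
The graph search at the heart of the method is standard Dijkstra over a (nozzle position, time) lattice. The sketch below illustrates that structure with an invented cost model and grid size; it is not the authors' GPU implementation.

```python
# Compact receding-horizon path search: nodes are (nozzle position,
# timestep), edge costs come from a paint-density map, Dijkstra picks the
# actuation path. Grid sizes and costs are invented for illustration.
import heapq
import numpy as np

N_POS, N_T = 16, 10                        # gantry positions x horizon steps
paint_needed = np.random.rand(N_POS, N_T)  # density map: high = must spray

def dijkstra_spray_path(start_pos):
    # Cost of entering a node: paint we fail to deliver there. The nozzle
    # may move at most one position per timestep (an assumption).
    dist = {(start_pos, 0): 0.0}
    prev, heap = {}, [(0.0, (start_pos, 0))]
    while heap:
        d, (p, t) = heapq.heappop(heap)
        if t == N_T - 1:                   # first goal pop is optimal
            path = [(p, t)]
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return path[::-1]
        for q in (p - 1, p, p + 1):
            if 0 <= q < N_POS:
                nd = d + (1.0 - paint_needed[q, t + 1])
                if nd < dist.get((q, t + 1), float("inf")):
                    dist[(q, t + 1)] = nd
                    prev[(q, t + 1)] = (p, t)
                    heapq.heappush(heap, (nd, (q, t + 1)))
    return []

print(dijkstra_spray_path(start_pos=8))
```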

Conference paper

Georgiou T, Demiris Y, 2017, Adaptive user modelling in car racing games using behavioural and physiological data, User Modeling and User-Adapted Interaction, Vol: 27, Pages: 267-311, ISSN: 1573-1391

Personalised content adaptation has great potential to increase user engagement in video games. Procedural generation of user-tailored content increases the self-motivation of players as they immerse themselves in the virtual world. An adaptive user model is needed to capture the skills of the player and enable automatic game content altering algorithms to fit the individual user. We propose an adaptive user modelling approach using a combination of unobtrusive physiological data to identify strengths and weaknesses in user performance in car racing games. Our system creates user-tailored tracks to improve driving habits and user experience, and to keep engagement at high levels. The user modelling approach adopts concepts from the Trace Theory framework; it uses machine learning to extract features from the user's physiological data and game-related actions, and clusters them into low-level primitives. These primitives are then transformed and evaluated into higher-level abstractions such as experience, exploration and attention. These abstractions are subsequently used to provide track alteration decisions for the player. Collection of data and feedback from 52 users allowed us to associate key model variables and outcomes with user responses, and to verify that the model provides statistically significant decisions personalised to the individual player. Tailored game content variations between users in our experiments, as well as the correlations with user satisfaction, demonstrate that our algorithm is able to automatically incorporate user feedback in subsequent procedural content generation.
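
The first modelling stage, clustering multimodal features into low-level primitives, might look like the following; the feature columns, cluster count, and the "attention" proxy are assumptions, not the paper's pipeline.

```python
# Sketch: cluster per-segment features from physiological signals and game
# actions into low-level primitives, then derive a crude abstraction.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows: one driving segment each. Columns (assumed): mean heart rate,
# skin conductance peaks, steering variance, braking frequency.
features = np.random.rand(300, 4)

primitives = KMeans(n_clusters=5, n_init=10, random_state=0)
labels = primitives.fit_predict(StandardScaler().fit_transform(features))

# Higher-level abstractions (e.g. "attention") would be scored from the mix
# of primitives each segment contains; a simple proxy is shown here.
attention_proxy = np.bincount(labels, minlength=5) / len(labels)
print(attention_proxy)
```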

Journal article

Wang R, Cully A, Chang HJ, Demiris Y et al., 2017, MAGAN: Margin Adaptation for Generative Adversarial Networks

We propose the Margin Adaptation for Generative Adversarial Networks (MAGANs) algorithm, a novel training procedure for GANs to improve stability and performance by using an adaptive hinge loss function. We estimate the appropriate hinge loss margin with the expected energy of the target distribution, and derive principled criteria for when to update the margin. We prove that our method converges to its global optimum under certain assumptions. Evaluated on the task of unsupervised image generation, the proposed training procedure is simple yet robust on a diverse set of data, and achieves qualitative and quantitative improvements compared to the state-of-the-art.
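
A toy rendering of the adaptive-margin idea is shown below, using synthetic energies and a simplified update criterion; it is not the paper's derivation or update rule.

```python
# Energy-based hinge loss with an adaptive margin, illustrated on
# synthetic discriminator energies.
import numpy as np

def hinge_d_loss(e_real, e_fake, margin):
    """Energy-based discriminator hinge loss with a margin on fakes."""
    return e_real.mean() + np.maximum(0.0, margin - e_fake).mean()

margin = 1.0
for epoch in range(5):
    e_real = np.abs(np.random.normal(0.5, 0.1, 256))  # stand-in energies
    e_fake = np.abs(np.random.normal(1.2, 0.2, 256))
    loss = hinge_d_loss(e_real, e_fake, margin)
    # Simplified criterion (an assumption): track the expected real energy
    # whenever it falls below the current margin.
    if e_real.mean() < margin:
        margin = e_real.mean()
    print(f"epoch {epoch}: loss={loss:.3f} margin={margin:.3f}")
```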

Working paper

Georgiou T, Demiris Y, 2017, Personalised Track Design in Car Racing Games, Computational Intelligence and Games, Publisher: IEEE, ISSN: 2325-4289

Real-time adaptation of computer games' content to the users' skills and abilities can enhance the player's engagement and immersion. Understanding the user's potential while playing is of high importance in order to allow the successful procedural generation of user-tailored content. We investigate how player models can be created in car racing games. Our user model uses a combination of data from unobtrusive sensors, while the user is playing a car racing simulator. It extracts features through machine learning techniques, which are then used to comprehend the user's gameplay, by utilising the educational theoretical frameworks of the Concept of Flow and Zone of Proximal Development. The end result is to provide, at a later stage, a new track that fits the user's needs, which aids both the training of the driver and their engagement in the game. In order to validate that the system is designing personalised tracks, we associated the average performance of the 41 users that played the game with the difficulty factor of the generated track. In addition, the variation in paths of the implemented tracks between users provides a good indicator of the suitability of the system.

Conference paper

Korkinof D, Demiris Y, 2016, Multi-task and multi-kernel gaussian process dynamical systems, Pattern Recognition, Vol: 66, Pages: 190-201, ISSN: 1873-5142

In this work, we propose a novel method for rectifying damaged motion sequences in an unsupervised manner. In order to achieve maximal accuracy, the proposed model takes advantage of three key properties of the data: their sequential nature, the redundancy that manifests itself among repetitions of the same task, and the potential of knowledge transfer across different tasks. To do so, we formulate a factor model consisting of Gaussian Process Dynamical Systems (GPDS), where each factor corresponds to a single basic pattern in time and is able to represent their sequential nature. Factors collectively form a dictionary of fundamental trajectories shared among all sequences, and are thus able to capture recurrent patterns within the same or across different tasks. We employ variational inference to learn directly from incomplete sequences and perform maximum a-posteriori (MAP) estimates of the missing values. We have evaluated our model with a number of motion datasets, including robotic and human motion capture data. We have compared our approach to well-established methods in the literature in terms of their reconstruction error, and our results indicate significant accuracy improvement across different datasets and missing data ratios. Finally, we investigate the performance benefits of the multi-task learning scenario and how this improvement relates to the extent of component sharing that takes place.

Journal article

Choi J, Chang H, Jeong J, Demiris Y, Choi JY et al., 2016, Visual tracking using attention-modulated disintegration and integration, IEEE Conference on Computer Vision and Pattern Recognition, Publisher: IEEE, ISSN: 1063-6919

In this paper, we present a novel attention-modulated visual tracking algorithm that decomposes an object into multiple cognitive units, and trains multiple elementary trackers in order to modulate the distribution of attention according to various feature and kernel types. In the integration stage it recombines the units to memorize and recognize the target object effectively. With respect to the elementary trackers, we present a novel attentional feature-based correlation filter (AtCF) that focuses on distinctive attentional features. The effectiveness of the proposed algorithm is validated through experimental comparison with state-of-the-art methods on widely-used tracking benchmark datasets.

Conference paper

Chang HJ, Fischer T, Petit M, Zambelli M, Demiris Y et al., 2016, Kinematic structure correspondences via hypergraph matching, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 4216-4225, ISSN: 1063-6919

In this paper, we present a novel framework for finding the kinematic structure correspondence between two objects in videos via hypergraph matching. In contrast to prior appearance- and graph-alignment-based matching methods, which have been applied between two similar static images, the proposed method finds correspondences between two dynamic kinematic structures of heterogeneous objects in videos. Our main contributions can be summarised as follows: (i) casting the kinematic structure correspondence problem into a hypergraph matching problem, incorporating multi-order similarities with normalising weights, (ii) a structural topology similarity measure by a new topology constrained subgraph isomorphism aggregation, (iii) a kinematic correlation measure between pairwise nodes, and (iv) a combinatorial local motion similarity measure using geodesic distance on the Riemannian manifold. We demonstrate the robustness and accuracy of our method through a number of experiments on complex articulated synthetic and real data.

Conference paper

Zambelli M, Demiris Y, 2016, Multimodal Imitation using Self-learned Sensorimotor Representations, IEEE/RSJ International Conference on Intelligent Robots and Systems, Publisher: IEEE, ISSN: 2153-0866

Although many tasks intrinsically involve multiple modalities, often only data from a single modality are used to improve complex robots' acquisition of new skills. We present a method to equip robots with multimodal learning skills to achieve multimodal imitation on-the-fly on multiple concurrent task spaces, including vision, touch and proprioception, only using self-learned multimodal sensorimotor relations, without the need of solving inverse kinematic problems or explicit analytical model formulation. We evaluate the proposed method on a humanoid iCub robot learning to interact with a piano keyboard and imitating a human demonstration. Since no assumptions are made on the kinematic structure of the robot, the method can also be applied to different robotic platforms.

Conference paper

Gao Y, Chang HJ, Demiris Y, 2016, Iterative Path Optimisation for Personalised Dressing Assistance using Vision and Force Information, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, ISSN: 2153-0866

We propose an online iterative path optimisation method to enable a Baxter humanoid robot to assist human users to dress. The robot searches for the optimal personalised dressing path using vision and force sensor information: vision information is used to recognise the human pose and model the movement space of upper-body joints; force sensor information is used for the robot to detect external force resistance and to locally adjust its motion. We propose a new stochastic path optimisation method based on adaptive moment estimation. We first compare the proposed method with other path optimisation algorithms on synthetic data. Experimental results show that the method achieves the smallest error with fewer iterations and less computation time. We also evaluate the method on real-world data by enabling the Baxter robot to assist real human users with their dressing.
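
Stochastic path optimisation with adaptive moment estimation reduces to an Adam loop over waypoints. The cost function below is a toy stand-in (goal attraction plus smoothness), not the paper's objective.

```python
# Adam-based stochastic path optimisation on a toy waypoint cost.
import numpy as np

def path_cost_grad(path, goal):
    """Noisy gradient of a toy cost: reach the goal, stay smooth."""
    grad = 2.0 * (path - goal) + np.random.normal(0.0, 0.1, path.shape)
    grad[1:-1] += 2.0 * (2 * path[1:-1] - path[:-2] - path[2:])
    return grad

path = np.zeros((10, 3))                   # 10 waypoints in 3-D
goal = np.ones(3)
m = v = np.zeros_like(path)
lr, b1, b2, eps = 0.05, 0.9, 0.999, 1e-8

for t in range(1, 201):                    # Adam update loop
    g = path_cost_grad(path, goal)
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat, v_hat = m / (1 - b1**t), v / (1 - b2**t)
    path -= lr * m_hat / (np.sqrt(v_hat) + eps)

print(np.round(path[-1], 2))               # last waypoint approaches goal
```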

Conference paper

Zambelli M, Demiris Y, 2016, Online multimodal ensemble learning using self-learned sensorimotor representations, IEEE Transactions on Cognitive and Developmental Systems, Vol: 9, Pages: 113-126, ISSN: 2379-8920

Internal models play a key role in cognitive agents by providing on the one hand predictions of sensory consequences of motor commands (forward models), and on the other hand inverse mappings (inverse models) to realise tasks involving control loops, such as imitation tasks. The ability to predict and generate new actions in continuously evolving environments that intrinsically require the use of different sensory modalities is particularly relevant for autonomous robots, which must also be able to adapt their models online. We present a learning architecture based on self-learned multimodal sensorimotor representations. To attain accurate forward models, we propose an online heterogeneous ensemble learning method that improves prediction accuracy by leveraging the differences of multiple diverse predictors. We further propose a method to learn inverse models on-the-fly to equip a robot with multimodal learning skills to perform imitation tasks using multiple sensory modalities. We have evaluated the proposed methods on an iCub humanoid robot. Since no assumptions are made on the robot's kinematic/dynamic structure, the method can be applied to different robotic platforms.
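
The online heterogeneous ensemble can be sketched as error-weighted averaging of diverse predictors; the predictor set and the inverse-error weighting rule are assumptions for illustration.

```python
# Online heterogeneous ensemble forward model: diverse predictors combined
# with weights that track their recent errors.
import numpy as np

class OnlineEnsemble:
    def __init__(self, predictors, decay=0.9):
        self.predictors = predictors          # list of callables
        self.errors = np.ones(len(predictors))
        self.decay = decay

    def predict(self, x):
        preds = np.array([p(x) for p in self.predictors])
        w = 1.0 / (self.errors + 1e-6)        # trust low-error models more
        return np.average(preds, weights=w)

    def update(self, x, y):
        """After the sensory outcome y arrives, refresh per-model errors."""
        for i, p in enumerate(self.predictors):
            e = abs(p(x) - y)
            self.errors[i] = self.decay * self.errors[i] + (1 - self.decay) * e

ens = OnlineEnsemble([lambda x: 0.8 * x, lambda x: x + 0.1, lambda x: 0.5])
for x in np.linspace(0, 1, 50):               # stream of motor commands
    ens.update(x, 0.8 * x)                    # true sensory consequence
print(ens.predict(0.5))                       # dominated by the best model
```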

Journal article

Petit M, Fischer T, Demiris Y, 2016, Towards the Emergence of Procedural Memories from Lifelong Multi-Modal Streaming Memories for Cognitive Robots, Workshop on Machine Learning Methods for High-Level Cognitive Capabilities in Robotics at IEEE/RSJ IROS

Various research topics are emerging as the demand for intelligent lifelong interactions between robots and humans increases. Among them are the examination of persistent storage, the continuous unsupervised annotation of memories and the usage of data at high frequency over long periods of time. We recently proposed a lifelong autobiographical memory architecture tackling some of these challenges, allowing the iCub humanoid robot to 1) create new memories for both actions that are self-executed and observed from humans, 2) continuously annotate these actions in an unsupervised manner, and 3) use reasoning modules to augment these memories a-posteriori. In this paper, we present a reasoning algorithm which generalises the robot's understanding of actions by finding points of commonality with previously stored ones. In particular, we generated and labelled templates of pointing actions in different directions. This represents a first step towards the emergence of a procedural memory within a long-term autobiographical memory framework for robots.

Conference paper

Zambelli M, Fischer T, Petit M, Chang HJ, Cully A, Demiris Y et al., 2016, Towards Anchoring Self-Learned Representations to Those of Other Agents, Workshop on Bio-inspired Social Robot Learning in Home Scenarios, IEEE/RSJ International Conference on Intelligent Robots and Systems, Publisher: Institute of Electrical and Electronics Engineers (IEEE)

In the future, robots will support humans in their everyday activities. One particular challenge that robots will face is understanding and reasoning about the actions of other agents in order to cooperate effectively with humans. We propose to tackle this using a developmental framework, where the robot incrementally acquires knowledge, and in particular 1) self-learns a mapping between motor commands and sensory consequences, 2) rapidly acquires primitives and complex actions by verbal descriptions and instructions from a human partner, 3) discovers correspondences between the robot's body and other articulated objects and agents, and 4) employs these correspondences to transfer the knowledge acquired from the robot's point of view to the viewpoint of the other agent. We show that our approach requires very little a-priori knowledge to achieve imitation learning, to find corresponding body parts of humans, and allows taking the perspective of another agent. This represents a step towards the emergence of a mirror-neuron-like system based on self-learned representations.

Conference paper

Petit M, Fischer T, Demiris Y, 2016, Lifelong Augmentation of Multi-Modal Streaming Autobiographical Memories, IEEE Transactions on Cognitive and Developmental Systems, Vol: 8, Pages: 201-213, ISSN: 2379-8920

Robot systems that interact with humans over extended periods of time will benefit from storing and recalling large amounts of accumulated sensorimotor and interaction data. We provide a principled framework for the cumulative organisation of streaming autobiographical data so that data can be continuously processed and augmented as the processing and reasoning abilities of the agent develop and further interactions with humans take place. As an example, we show how a kinematic structure learning algorithm reasons a-posteriori about the skeleton of a human hand. A partner can be asked to provide feedback about the augmented memories, which can in turn be supplied to the reasoning processes in order to adapt their parameters. We employ active, multi-modal remembering, so the robot as well as humans can gain insights into both the original and augmented memories. Our framework is capable of storing discrete and continuous data in real-time. The data can cover multiple modalities and several layers of abstraction (e.g. from raw sound signals over sentences to extracted meanings). We show a typical interaction with a human partner using an iCub humanoid robot. The framework is implemented in a platform-independent manner. In particular, we validate its multi-platform capabilities using the iCub, Baxter and NAO robots. We also provide an interface to cloud-based services, which allows automatic annotation of episodes. Our framework is geared towards the developmental robotics community, as it 1) provides a variety of interfaces for other modules, 2) unifies previous works on autobiographical memory, and 3) is licensed as open source software.

Journal article

Ros R, Oleari E, Pozzi C, Sacchitelli F, Baranzini D, Bagherzadhalimi A, Sanna A, Demiris Y et al., 2016, A motivational approach to support healthy habits in long-term child–robot interaction, International Journal of Social Robotics, Vol: 8, Pages: 599-617, ISSN: 1875-4791

We examine the use of role-switching as an intrinsic motivational mechanism to increase engagement in long-term child–robot interaction. The present study describes a learning framework where children between 9 and 11 years old interact with a robot to improve their knowledge and habits with regards to healthy life-styles. Experiments were carried out in Italy, where 41 children were divided into three groups interacting with: (i) a robot with a role-switching mechanism, (ii) a robot without a role-switching mechanism and (iii) an interactive video. Additionally, a control group composed of 43 more children, who were not exposed to any interactive approach, was used as a baseline of the study. During the intervention period, the three groups were exposed to three interactive sessions once a week. The aim of the study was to find any difference in healthy-habits acquisition based on alternative interactive systems, and to evaluate the effectiveness of the role-switch approach as a trigger for engagement and motivation while interacting with a robot. The results provide evidence that the rate of children adopting healthy habits during the intervention period was higher for those interacting with a robot. Moreover, alignment with the robot behaviour and achievement of higher engagement levels were also observed for those children interacting with the robot that used the role-switching mechanism. This supports the notion that role-switching facilitates sustained long-term interactions between a child and a robot.

Journal article

Gao Y, Chang HJ, Demiris Y, 2016, Personalised assistive dressing by humanoid robots using multi-modal information, Workshop on Human-Robot Interfaces for Enhanced Physical Interactions at ICRA

In this paper, we present an approach to enable a humanoid robot to provide personalised dressing assistance for human users using multi-modal information. A depth sensor is mounted on top of the robot to provide visual information, and the robot end effectors are equipped with force sensors to provide haptic information. We use visual information to model the movement range of human upper-body parts. The robot plans the dressing motions using the movement range models and real-time human pose. During assistive dressing, the force sensors are used to detect external force resistances. We present how the robot locally adjusts its motions based on the detected forces. In the experiments we show that the robot can assist a human to wear a sleeveless jacket while reacting to the force resistances.

Conference paper

Petit M, Demiris Y, 2016, Hierarchical action learning by instruction through interactive grounding of body parts and proto-actions, IEEE International Conference on Robotics and Automation, Publisher: IEEE, Pages: 3375-3382

Learning by instruction allows humans to program a robot to achieve a task using spoken language, without requiring them to be able to do the task themselves, which can be problematic for users with motor impairments. We provide a developmental framework to program the humanoid robot iCub without any hand-coded a-priori knowledge about any motor skills. Inspired by child development theories, the system involves hierarchical learning, starting with the human verbally labelling robot body parts. The robot can then focus its attention on a precise body part during robot motor babbling, and link the on-the-fly spoken descriptions of proto-actions to angle values of a specific joint. The direct grounding of proto-actions is possible through the use of a linear model which calculates the effects of the proto-action and the body part used on the joint, allowing a generalisation of the proto-action if the joint has never been used before. Eventually, transferring the grounding is allowed via learning by instruction, where humans can combine the newly acquired proto-actions to build primitives and more complex actions by scaffolding them. The framework has been validated using the humanoid robot iCub, which is able to learn without any prior knowledge: 1) the names of its fingers and the corresponding joint numbers, 2) how to fold and unfold them, and 3) how to close or open its hand and how to show numbers with its fingers.
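
The linear effect model that grounds proto-actions can be illustrated in a few lines; the encodings, joint numbering, and training deltas below are invented for illustration.

```python
# Sketch of the grounding step: a linear model relates (proto-action,
# body part) pairs to the joint-angle change observed during babbling,
# so an unseen joint can reuse the learned verb.
import numpy as np

# Training tuples from babbling: one-hot proto-action ("fold"/"unfold"),
# joint id as a feature, observed delta of that joint's angle in degrees.
X = np.array([[1, 0, 0], [1, 0, 1], [0, 1, 0], [0, 1, 1]], dtype=float)
y = np.array([-40.0, -42.0, +41.0, +39.0])     # "fold" closes, "unfold" opens

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # fit the linear effect model

def predicted_effect(action_onehot, joint_id):
    return np.array([*action_onehot, joint_id]) @ coef

# Generalisation: joint 2 was never babbled, but "fold" still predicts a
# closing motion of roughly the learned magnitude.
print(predicted_effect((1, 0), 2))
```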

Conference paper

Fischer T, Demiris Y, 2016, Markerless Perspective Taking for Humanoid Robots in Unconstrained Environments, 2016 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 3309-3316

Perspective taking enables humans to imagine the world from another viewpoint. This allows reasoning about the state of other agents, which in turn is used to more accurately predict their behavior. In this paper, we equip an iCub humanoid robot with the ability to perform visuospatial perspective taking (PT) using a single depth camera mounted above the robot. Our approach has the distinct benefit that the robot can be used in unconstrained environments, as opposed to previous works which employ marker-based motion capture systems. Prior to and during the PT, the iCub learns the environment, recognizes objects within the environment, and estimates the gaze of surrounding humans. We propose a new head pose estimation algorithm which shows a performance boost by normalizing the depth data to be aligned with the human head. Inspired by psychological studies, we employ two separate mechanisms for the two different types of PT. We implement line of sight tracing to determine whether an object is visible to the humans (level 1 PT). For more complex PT tasks (level 2 PT), the acquired point cloud is mentally rotated, which allows algorithms to reason as if the input data was acquired from an egocentric perspective. We show that this can be used to better judge where objects are in relation to the humans. The multifaceted improvements to the PT pipeline advance the state of the art, and move PT in robots to markerless, unconstrained environments.
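
Level 2 perspective taking as mental rotation amounts to a rigid transform of the point cloud into the human's head frame, as sketched below with placeholder pose values and an assumed "in front of" convention.

```python
# Re-express world-frame scene points in the observed human's head frame,
# after which egocentric reasoning can be reused unchanged.
import numpy as np

def to_other_perspective(points, head_pos, head_yaw_rad):
    """Rotate/translate world-frame points into the other agent's frame."""
    c, s = np.cos(-head_yaw_rad), np.sin(-head_yaw_rad)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return (points - head_pos) @ R.T

cloud = np.random.rand(1000, 3)                  # acquired scene points
human_view = to_other_perspective(cloud, head_pos=np.array([1.0, 0.5, 1.6]),
                                  head_yaw_rad=np.pi / 2)
# An object is "in front of" the human if its x-coordinate is positive in
# this frame (a convention assumed here).
print((human_view[:, 0] > 0).mean())
```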

Conference paper

Ribes A, Cerquides J, Demiris Y, Lopez de Mantaras R et al., 2016, Active Learning of Object and Body Models with Time Constraints on a Humanoid Robot, IEEE Transactions on Cognitive and Developmental Systems, Vol: 8, Pages: 26-41, ISSN: 2379-8920

Journal article

Coninx A, Baxter P, Oleari E, Bellini S, Bierman B, Henkemans OB, Canamero L, Cosi P, Enescu V, Espinoza RR, Hiolle A, Humbert R, Kiefer B, Kruijff-Korbayova I, Looije R-M, Mosconi M, Neerincx M, Paci G, Patsis G, Pozzi C, Sacchitelli F, Sahli H, Sanna A, Sommavilla G, Tesser F, Demiris Y, Belpaeme T et al., 2016, Towards Long-Term Social Child-Robot Interaction: Using Multi-Activity Switching to Engage Young Users, Journal of Human-Robot Interaction, Vol: 5, Pages: 32-67, ISSN: 2163-0364

Journal article

Kristan M, Leonardis A, Matas J, Felsberg M, Pflugfelder R, Cehovin L, Vojir T, Hager G, Lukezic A, Fernandez G, Gupta A, Petrosino A, Memarmoghadam A, Garcia-Martin A, Montero AS, Vedaldi A, Robinson A, Ma AJ, Varfolomieiev A, Alatan A, Erdem A, Ghanem B, Liu B, Han B, Martinez B, Chang C-M, Xu C, Sun C, Kim D, Chen D, Du D, Mishra D, Yeung D-Y, Gundogdu E, Erdem E, Khan F, Porikli F, Zhao F, Bunyak F, Battistone F, Zhu G, Roffo G, Subrahmanyam GRKS, Bastos G, Seetharaman G, Medeiros H, Li H, Qi H, Bischof H, Possegger H, Lu H, Lee H, Nam H, Chang HJ, Drummond I, Valmadre J, Jeong J-C, Cho J-I, Lee J-Y, Zhu J, Feng J, Gao J, Choi JY, Xiao J, Kim J-W, Jeong J, Henriques JF, Lang J, Choi J, Martinez JM, Xing J, Gao J, Palaniappan K, Lebeda K, Gao K, Mikolajczyk K, Qin L, Wang L, Wen L, Bertinetto L, Rapuru MK, Poostchi M, Maresca M, Danelljan M, Mueller M, Zhang M, Arens M, Valstar M, Tang M, Baek M, Khan MH, Wang N, Fan N, Al-Shakarji N, Miksik O, Akin O, Moallem P, Senna P, Torr PHS, Yuen PC, Huang Q, Martin-Nieto R, Pelapur R, Bowden R, Laganiere R, Stolkin R, Walsh R, Krah SB, Li S, Zhang S, Yao S, Hadfield S, Melzi S, Lyu S, Li S, Becker S, Golodetz S, Kakanuru S, Choi S, Hu T, Mauthner T, Zhang T, Pridmore T, Santopietro V, Hu W, Li W, Huebner W, Lan X, Wang X, Li X, Li Y, Demiris Y, Wang Y, Qi Y, Yuan Z, Cai Z, Xu Z, He Z, Chi Z et al., 2016, The Visual Object Tracking VOT2016 Challenge Results, 14th European Conference on Computer Vision (ECCV), Publisher: Springer International Publishing AG, Pages: 777-823, ISSN: 0302-9743

Conference paper

Lee K, Ognibene D, Chang H, Kim T-K, Demiris Y et al., 2015, STARE: Spatio-Temporal Attention Relocation for multiple structured activities detection, IEEE Transactions on Image Processing, Vol: 24, Pages: 5916-5927, ISSN: 1057-7149

We present a spatio-temporal attention relocation (STARE) method, an information-theoretic approach for efficient detection of simultaneously occurring structured activities. Given multiple human activities in a scene, our method dynamically focuses on the currently most informative activity. Each activity can be detected without complete observation, as the structure of sequential actions plays an important role in making the system robust to unattended observations. For such systems, the ability to decide where and when to focus is crucial to achieving high detection performance under resource-bounded conditions. Our main contributions can be summarized as follows: 1) an information-theoretic dynamic attention relocation framework that allows the detection of multiple activities efficiently by exploiting the activity structure information and 2) a new high-resolution dataset of temporally-structured concurrent activities. Our experiments on applications show that the STARE method performs efficiently while maintaining a reasonable level of accuracy.
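
An entropy-driven attention loop conveys the flavour of the approach; the belief-update rule below is a simplified stand-in, not the paper's observation model.

```python
# Information-theoretic attention relocation: at each step, attend to the
# activity whose belief is most uncertain (highest entropy), observe it,
# and sharpen that belief.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Belief over action classes for each of three concurrent activities.
beliefs = [np.full(4, 0.25), np.array([0.7, 0.1, 0.1, 0.1]),
           np.array([0.4, 0.4, 0.1, 0.1])]

for step in range(5):
    target = int(np.argmax([entropy(b) for b in beliefs]))  # most informative
    # Simplified observation model (an assumption): attending sharpens the
    # belief toward its current maximum-likelihood class.
    b = beliefs[target] ** 2
    beliefs[target] = b / b.sum()
    print(f"step {step}: attended activity {target}")
```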

Journal article

