
    Kristan M, Leonardis A, Matas J, Felsberg M, Pflugfelder R, Zajc LČ, Vojír T, Bhat G, Lukežič A, Eldesokey A, Fernández G, García-Martín Á, Iglesias-Arias Á, Alatan AA, González-García A, Petrosino A, Memarmoghadam A, Vedaldi A, Muhič A, He A, Smeulders A, Perera AG, Li B, Chen B, Kim C, Xu C, Xiong C, Tian C, Luo C, Sun C, Hao C, Kim D, Mishra D, Chen D, Wang D, Wee D, Gavves E, Gundogdu E, Velasco-Salido E, Khan FS, Yang F, Zhao F, Li F, Battistone F, De Ath G, Subrahmanyam GRKS, Bastos G, Ling H, Galoogahi HK, Lee H, Li H, Zhao H, Fan H, Zhang H, Possegger H, Li H, Lu H, Zhi H, Li H, Lee H, Chang HJ, Drummond I, Valmadre J, Martin JS, Chahl J, Choi JY, Li J, Wang J, Qi J, Sung J, Johnander J, Henriques J, Choi J, van de Weijer J, Herranz JR, Martínez JM, Kittler J, Zhuang J, Gao J, Grm K, Zhang L, Wang L, Yang L, Rout L, Si L, Bertinetto L, Chu L, Che M, Maresca ME, Danelljan M, Yang MH, Abdelpakey M, Shehata M, Kang M et al., 2019,

    The sixth visual object tracking VOT2018 challenge results

    , European Conference on Computer Vision, Publisher: Springer, Pages: 3-53, ISSN: 0302-9743

    The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis and a “real-time” experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking subchallenge has been introduced to the set of standard VOT sub-challenges. The new subchallenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both standard short-term and the new long-term tracking subchallenges. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website.

    Choi J, Chang HJ, Fischer T, Yun S, Lee K, Jeong J, Demiris Y, Choi JY et al., 2018,

    Context-aware deep feature compression for high-speed visual tracking

    , IEEE Conference on Computer Vision and Pattern Recognition, Publisher: Institute of Electrical and Electronics Engineers, Pages: 479-488, ISSN: 1063-6919

    We propose a new context-aware correlation filter based tracking framework to achieve both high computational speed and state-of-the-art performance among real-time trackers. The major contribution to the high computational speed lies in the proposed deep feature compression that is achieved by a context-aware scheme utilizing multiple expert auto-encoders; a context in our framework refers to the coarse category of the tracking target according to appearance patterns. In the pre-training phase, one expert auto-encoder is trained per category. In the tracking phase, the best expert auto-encoder is selected for a given target, and only this auto-encoder is used. To achieve high tracking performance with the compressed feature map, we introduce extrinsic denoising processes and a new orthogonality loss term for pre-training and fine-tuning of the expert auto-encoders. We validate the proposed context-aware framework through a number of experiments, where our method achieves performance comparable to state-of-the-art trackers that cannot run in real time, while running at over 100 fps.
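
    The expert-selection step described above — picking the auto-encoder whose reconstruction of the target's features is best — can be sketched in a few lines. This is an illustrative example only, not the paper's implementation: linear PCA "auto-encoders" stand in for the trained deep ones, and all names and data are invented.

```python
import numpy as np

def fit_expert(features, k):
    # "Train" one expert as a linear auto-encoder via PCA: the top-k
    # principal directions serve as both encoder and decoder weights.
    mean = features.mean(axis=0)
    _, _, vt = np.linalg.svd(features - mean, full_matrices=False)
    return mean, vt[:k]

def reconstruction_error(expert, x):
    mean, basis = expert
    z = (x - mean) @ basis.T     # encode (compress the feature vector)
    x_hat = z @ basis + mean     # decode
    return np.linalg.norm(x - x_hat)

def select_expert(experts, x):
    # Pick the expert (context) whose auto-encoder reconstructs x best.
    errors = [reconstruction_error(e, x) for e in experts]
    return int(np.argmin(errors))

rng = np.random.default_rng(0)
# Two synthetic "contexts": features concentrated in different subspaces.
ctx_a = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 16))
ctx_b = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 16))
experts = [fit_expert(ctx_a, 2), fit_expert(ctx_b, 2)]

print(select_expert(experts, ctx_a[0]))  # → 0: the context-A expert fits best
```

    Only the selected expert's low-dimensional code is then used downstream, which is where the speed-up in such schemes comes from.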

    Moulin-Frier C, Fischer T, Petit M, Pointeau G, Puigbo J-Y, Pattacini U, Ching Low S, Camilleri D, Nguyen P, Hoffmann M, Chang HJ, Zambelli M, Mealier A-L, Damianou A, Metta G, Prescott TJ, Demiris Y, Dominey PF, Verschure PFMJ et al., 2018,

    DAC-h3: A Proactive Robot Cognitive Architecture to Acquire and Express Knowledge About the World and the Self

    Chang HJ, Fischer T, Petit M, Zambelli M, Demiris Y et al., 2018,

    Learning kinematic structure correspondences using multi-order similarities

    , IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol: 40, Pages: 2920-2934, ISSN: 0162-8828

    We present a novel framework for finding the kinematic structure correspondences between two articulated objects in videos via hypergraph matching. In contrast to appearance and graph alignment based matching methods, which have been applied between two similar static images, the proposed method finds correspondences between two dynamic kinematic structures of heterogeneous objects in videos. Thus our method allows matching the structure of objects which have similar topologies or motions, or a combination of the two. Our main contributions are summarised as follows: (i) casting the kinematic structure correspondence problem into a hypergraph matching problem by incorporating multi-order similarities with normalising weights, (ii) introducing a structural topology similarity measure by aggregating topology constrained subgraph isomorphisms, (iii) measuring kinematic correlations between pairwise nodes, and (iv) proposing a combinatorial local motion similarity measure using geodesic distance on the Riemannian manifold. We demonstrate the robustness and accuracy of our method through a number of experiments on synthetic and real data, showing that it outperforms various other recent and state-of-the-art methods. Our method is not limited to a specific application or sensor, and can be used as a building block in applications such as action recognition, human motion retargeting to robots, and articulated object manipulation.

    Chang HJ, Demiris Y, 2018,

    Highly Articulated Kinematic Structure Estimation Combining Motion and Skeleton Information

    Fischer T, Demiris Y, 2018,

    A computational model for embodied visual perspective taking: from physical movements to mental simulation

    , Vision Meets Cognition Workshop at CVPR 2018

    To understand people and their intentions, humans have developed the ability to imagine their surroundings from another visual point of view. This cognitive ability is called perspective taking and has been shown to be essential in child development and social interactions. However, the precise cognitive mechanisms underlying perspective taking remain to be fully understood. Here we present a computational model that implements perspective taking as a mental simulation of the physical movements required to step into the other point of view. The visual percept after each mental simulation step is estimated using a set of forward models. Based on our experimental results, we propose that a visual attention mechanism explains the response times reported in human visual perspective taking experiments. The model is also able to generate several testable predictions to be explored in further neurophysiological studies.

    Elsdon J, Demiris Y, 2018,

    Augmented reality for feedback in a shared control spraying task

    , IEEE International Conference on Robotics and Automation (ICRA), Publisher: Institute of Electrical and Electronics Engineers (IEEE), Pages: 1939-1946, ISSN: 1050-4729

    Using industrial robots to spray structures has been investigated extensively; however, interesting challenges emerge when using handheld spraying robots. In previous work we demonstrated the use of shared control of a handheld spraying robot to assist a user in a 3D spraying task. In this paper we demonstrate the use of augmented reality interfaces to increase the user's progress and task awareness. We describe our solutions to challenging calibration issues between the Microsoft HoloLens system and a motion capture system, without the need for well-defined markers or careful alignment on the part of the user. Error relative to the motion capture system was shown to be 10mm after only a 4-second calibration routine. Secondly, we outline a logical approach for visualising liquid density for an augmented reality spraying task; this system allows the user to clearly see target regions still to complete, areas that are complete, and areas that have been overdosed. Finally, we conducted a user study to investigate the level of assistance that a handheld robot utilising shared control methods should provide during a spraying task. Using a handheld spraying robot with a moving spray head did not aid the user much over simply actuating the spray nozzle for them. Compared to manual control, the automatic modes significantly reduced the task load experienced by the user and significantly increased the quality of the result of the spraying task, reducing the error by 33-45%.

    Cully A, Demiris Y, 2018,

    Quality and Diversity Optimization: A Unifying Modular Framework

    Fischer T, Puigbo J-Y, Camilleri D, Nguyen PDH, Moulin-Frier C, Lallee S, Metta G, Prescott TJ, Demiris Y, Verschure PFMJ et al., 2018,

    iCub-HRI: A Software Framework for Complex Human-Robot Interaction Scenarios on the iCub Humanoid Robot

    , Frontiers in Robotics and AI, Vol: 5, ISSN: 2296-9144

    Nguyen PDH, Fischer T, Chang HJ, Pattacini U, Metta G, Demiris Y et al., 2018,

    Transferring Visuomotor Learning from Simulation to the Real World for Robotics Manipulation Tasks

    , 25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 6667-6674, ISSN: 2153-0858

    Zolotas M, Elsdon J, Demiris Y, 2018,

    Head-Mounted Augmented Reality for Explainable Robotic Wheelchair Assistance

    , 25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 1823-1829, ISSN: 2153-0858

    Fischer T, Chang HJ, Demiris Y, 2018,

    RT-GENE: Real-time eye gaze estimation in natural environments

    , Pages: 339-357, ISSN: 0302-9743

    In this work, we consider the problem of robust gaze estimation in natural environments. Large camera-to-subject distances and high variations in head pose and eye gaze angles are common in such environments. This leads to two main shortfalls in state-of-the-art methods for gaze estimation: hindered ground truth gaze annotation and diminished gaze estimation accuracy as image resolution decreases with distance. We first record a novel dataset of varied gaze and head pose images in a natural environment, addressing the issue of ground truth annotation by measuring head pose using a motion capture system and eye gaze using mobile eye-tracking glasses. We apply semantic image inpainting to the area covered by the glasses to bridge the gap between training and testing images by removing the obtrusiveness of the glasses. We also present a new real-time algorithm involving appearance-based deep convolutional neural networks with increased capacity to cope with the diverse images in the new dataset. Experiments with this network architecture are conducted on a number of diverse eye-gaze datasets including our own, and in cross-dataset evaluations. We demonstrate state-of-the-art performance in terms of estimation accuracy in all experiments, and the architecture performs well even on lower resolution images.

    Choi J, Chang HJ, Yun S, Fischer T, Demiris Y, Choi JY et al., 2017,

    Attentional correlation filter network for adaptive visual tracking

    , IEEE Conference on Computer Vision and Pattern Recognition, Publisher: IEEE, ISSN: 1063-6919

    We propose a new tracking framework with an attentional mechanism that chooses a subset of the associated correlation filters for increased robustness and computational efficiency. The subset of filters is adaptively selected by a deep attentional network according to the dynamic properties of the tracking target. Our contributions are manifold, and are summarised as follows: (i) Introducing the Attentional Correlation Filter Network which allows adaptive tracking of dynamic targets. (ii) Utilising an attentional network which shifts the attention to the best candidate modules, as well as predicting the estimated accuracy of currently inactive modules. (iii) Enlarging the variety of correlation filters which cover target drift, blurriness, occlusion, scale changes, and flexible aspect ratio. (iv) Validating the robustness and efficiency of the attentional mechanism for visual tracking through a number of experiments. Our method achieves similar performance to non real-time trackers, and state-of-the-art performance amongst real-time trackers.
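
    The building block behind this family of trackers is the discriminative correlation filter, trained in closed form in the Fourier domain. A minimal MOSSE-style sketch follows; it is illustrative only (the paper's attentional network and filter bank are not reproduced), and the data is synthetic.

```python
import numpy as np

def gaussian_peak(shape, sigma=2.0):
    # Desired filter response: a Gaussian centred on the target position.
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    return np.exp(-((x - w // 2) ** 2 + (y - h // 2) ** 2) / (2 * sigma**2))

def train_filter(patch, response, lam=1e-2):
    # Closed-form correlation filter in the Fourier domain (MOSSE-style):
    # H* = (G . conj(F)) / (F . conj(F) + lambda)
    F = np.fft.fft2(patch)
    G = np.fft.fft2(response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def respond(filt, patch):
    # Correlate a new patch with the filter; the peak marks the target.
    return np.real(np.fft.ifft2(filt * np.fft.fft2(patch)))

rng = np.random.default_rng(1)
patch = rng.normal(size=(32, 32))          # stand-in for an image patch
H = train_filter(patch, gaussian_peak(patch.shape))

resp = respond(H, patch)
peak = np.unravel_index(np.argmax(resp), resp.shape)
print(peak)  # → (16, 16): the filter localises the training target at centre
```

    An attentional scheme such as the one described above maintains several such filters (for drift, blur, occlusion, scale) and activates only the subset predicted to be accurate for the current target.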

    Korkinof D, Demiris Y, 2017,

    Multi-task and multi-kernel Gaussian process dynamical systems

    , Pattern Recognition, Vol: 66, Pages: 190-201, ISSN: 0031-3203

    Zambelli M, Demiris Y, 2017,

    Online Multimodal Ensemble Learning Using Self-Learned Sensorimotor Representations

    Georgiou T, Demiris Y, 2017,

    Adaptive user modelling in car racing games using behavioural and physiological data

    , User Modeling and User-Adapted Interaction, Vol: 27, Pages: 267-311, ISSN: 0924-1868

    Elsdon J, Demiris Y, 2017,

    Assisted painting of 3D structures using shared control with a hand-held robot

    , 2017 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE

    Yoo Y, Yun S, Chang HJ, Demiris Y, Choi JY et al., 2017,

    Variational Autoencoded Regression: High Dimensional Regression of Visual Data on Complex Manifold

    , 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 2943-2952, ISSN: 1063-6919

    Zhang F, Cully A, Demiris Y, 2017,

    Personalized Robot-assisted Dressing using User Modeling in Latent Spaces

    , IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 3603-3610, ISSN: 2153-0858

    Chang HJ, Fischer T, Petit M, Zambelli M, Demiris Y et al., 2016,

    Kinematic structure correspondences via hypergraph matching

    , IEEE Conference on Computer Vision and Pattern Recognition, Publisher: IEEE, ISSN: 1063-6919

    In this paper, we present a novel framework for finding the kinematic structure correspondence between two objects in videos via hypergraph matching. In contrast to prior appearance and graph alignment based matching methods, which have been applied between two similar static images, the proposed method finds correspondences between two dynamic kinematic structures of heterogeneous objects in videos. Our main contributions can be summarised as follows: (i) casting the kinematic structure correspondence problem into a hypergraph matching problem, incorporating multi-order similarities with normalising weights, (ii) a structural topology similarity measure by a new topology constrained subgraph isomorphism aggregation, (iii) a kinematic correlation measure between pairwise nodes, and (iv) a combinatorial local motion similarity measure using geodesic distance on the Riemannian manifold. We demonstrate the robustness and accuracy of our method through a number of experiments on complex articulated synthetic and real data.

    Petit M, Fischer T, Demiris Y, 2016,

    Towards the Emergence of Procedural Memories from Lifelong Multi-Modal Streaming Memories for Cognitive Robots

    , Workshop on Machine Learning Methods for High-Level Cognitive Capabilities in Robotics at IEEE/RSJ IROS

    Various research topics are emerging as the demand for intelligent lifelong interactions between robots and humans increases. Among them, we can find the examination of persistent storage, the continuous unsupervised annotation of memories and the usage of data at high frequency over long periods of time. We recently proposed a lifelong autobiographical memory architecture tackling some of these challenges, allowing the iCub humanoid robot to 1) create new memories for both actions that are self-executed and observed from humans, 2) continuously annotate these actions in an unsupervised manner, and 3) use reasoning modules to augment these memories a-posteriori. In this paper, we present a reasoning algorithm which generalises the robot's understanding of actions by finding points of commonality with previously learned ones. In particular, we generated and labelled templates of pointing actions in different directions. This represents a first step towards the emergence of a procedural memory within a long-term autobiographical memory framework for robots.

    Zambelli M, Fischer T, Petit M, Chang HJ, Cully A, Demiris Y et al., 2016,

    Towards Anchoring Self-Learned Representations to Those of Other Agents

    , Workshop on Bio-inspired Social Robot Learning in Home Scenarios IEEE/RSJ International Conference on Intelligent Robots and Systems, Publisher: Institute of Electrical and Electronics Engineers (IEEE)

    In the future, robots will support humans in their everyday activities. One particular challenge that robots will face is understanding and reasoning about the actions of other agents in order to cooperate effectively with humans. We propose to tackle this using a developmental framework, where the robot incrementally acquires knowledge, and in particular 1) self-learns a mapping between motor commands and sensory consequences, 2) rapidly acquires primitives and complex actions by verbal descriptions and instructions from a human partner, 3) discovers correspondences between the robot's body and other articulated objects and agents, and 4) employs these correspondences to transfer the knowledge acquired from the robot's point of view to the viewpoint of the other agent. We show that our approach requires very little a-priori knowledge to achieve imitation learning, to find corresponding body parts of humans, and allows taking the perspective of another agent. This represents a step towards the emergence of a mirror-neuron-like system based on self-learned representations.

    Petit M, Fischer T, Demiris Y, 2016,

    Lifelong Augmentation of Multi-Modal Streaming Autobiographical Memories

    , IEEE Transactions on Cognitive and Developmental Systems, Vol: 8, Pages: 201-213, ISSN: 2379-8920

    Robot systems that interact with humans over extended periods of time will benefit from storing and recalling large amounts of accumulated sensorimotor and interaction data. We provide a principled framework for the cumulative organisation of streaming autobiographical data so that data can be continuously processed and augmented as the processing and reasoning abilities of the agent develop and further interactions with humans take place. As an example, we show how a kinematic structure learning algorithm reasons a-posteriori about the skeleton of a human hand. A partner can be asked to provide feedback about the augmented memories, which can in turn be supplied to the reasoning processes in order to adapt their parameters. We employ active, multi-modal remembering, so the robot as well as humans can gain insights into both the original and augmented memories. Our framework is capable of storing discrete and continuous data in real-time. The data can cover multiple modalities and several layers of abstraction (e.g. from raw sound signals through sentences to extracted meanings). We show a typical interaction with a human partner using an iCub humanoid robot. The framework is implemented in a platform-independent manner. In particular, we validate its multi-platform capabilities using the iCub, Baxter and NAO robots. We also provide an interface to cloud-based services, which allow automatic annotation of episodes. Our framework is geared towards the developmental robotics community, as it 1) provides a variety of interfaces for other modules, 2) unifies previous works on autobiographical memory, and 3) is licensed as open source software.

    Gao Y, Chang HJ, Demiris Y, 2016,

    Personalised assistive dressing by humanoid robots using multi-modal information

    , Workshop on Human-Robot Interfaces for Enhanced Physical Interactions at ICRA

    In this paper, we present an approach to enable a humanoid robot to provide personalised dressing assistance for human users using multi-modal information. A depth sensor is mounted on top of the robot to provide visual information, and the robot end effectors are equipped with force sensors to provide haptic information. We use visual information to model the movement range of human upper-body parts. The robot plans the dressing motions using the movement range models and real-time human pose. During assistive dressing, the force sensors are used to detect external force resistances. We present how the robot locally adjusts its motions based on the detected forces. In the experiments we show that the robot can assist a human in wearing a sleeveless jacket while reacting to the force resistances.

    Fischer T, Demiris Y, 2016,

    Markerless Perspective Taking for Humanoid Robots in Unconstrained Environments

    , 2016 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 3309-3316

    Perspective taking enables humans to imagine the world from another viewpoint. This allows reasoning about the state of other agents, which in turn is used to more accurately predict their behavior. In this paper, we equip an iCub humanoid robot with the ability to perform visuospatial perspective taking (PT) using a single depth camera mounted above the robot. Our approach has the distinct benefit that the robot can be used in unconstrained environments, as opposed to previous works which employ marker-based motion capture systems. Prior to and during the PT, the iCub learns the environment, recognizes objects within the environment, and estimates the gaze of surrounding humans. We propose a new head pose estimation algorithm which shows a performance boost by normalizing the depth data to be aligned with the human head. Inspired by psychological studies, we employ two separate mechanisms for the two different types of PT. We implement line of sight tracing to determine whether an object is visible to the humans (level 1 PT). For more complex PT tasks (level 2 PT), the acquired point cloud is mentally rotated, which allows algorithms to reason as if the input data was acquired from an egocentric perspective. We show that this can be used to better judge where objects are in relation to the humans. The multifaceted improvements to the PT pipeline advance the state of the art, and move PT in robots to markerless, unconstrained environments.
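
    The "mental rotation" at the heart of level 2 PT amounts to re-expressing the observed point cloud in the other agent's egocentric frame. A minimal numpy sketch, not the paper's implementation; the function name and the convention that the egocentric forward axis is +x are assumptions of this example:

```python
import numpy as np

def egocentric_transform(points, agent_pos, agent_yaw):
    # Express world-frame points in the other agent's egocentric frame:
    # translate to the agent's position, then rotate by its heading
    # (yaw about the vertical z axis).
    c, s = np.cos(-agent_yaw), np.sin(-agent_yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return (points - agent_pos) @ rot.T

# An object one metre in front of an agent facing +y in the world frame.
agent = np.array([2.0, 3.0, 0.0])
obj = np.array([[2.0, 4.0, 0.0]])
print(egocentric_transform(obj, agent, np.pi / 2))
# → ~[[1, 0, 0]]: one metre straight ahead along the agent's forward axis
```

    Once the cloud is in this frame, any egocentric reasoning algorithm can be applied unchanged, which is the point of the rotation step.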

    Petit M, Demiris Y, 2016,

    Hierarchical Action Learning by Instruction Through Interactive Grounding of Body Parts and Proto-actions

    , IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 3375-3382, ISSN: 1050-4729

    Gao Y, Chang HJ, Demiris Y, 2016,

    Iterative Path Optimisation for Personalised Dressing Assistance using Vision and Force Information

    , IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 4398-4403

    Zambelli M, Demiris Y, 2016,

    Multimodal Imitation using Self-learned Sensorimotor Representations

    , IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 3953-3958

    Georgiou T, Demiris Y, 2016,

    Personalised Track Design in Car Racing Games

    , IEEE Conference on Computational Intelligence and Games (CIG), Publisher: IEEE, ISSN: 2325-4270

    Choi J, Chang HJ, Jeong J, Demiris Y, Choi JY et al., 2016,

    Visual Tracking Using Attention-Modulated Disintegration and Integration

    , 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 4321-4330, ISSN: 1063-6919

    Lee K, Ognibene D, Chang HJ, Kim T-K, Demiris Y et al., 2015,

    STARE: Spatio-Temporal Attention Relocation for Multiple Structured Activities Detection

    Zambelli M, Demiris Y, 2015,

    Online Ensemble Learning of Sensorimotor Contingencies

    , Workshop on Sensorimotor Contingencies For Robotics at IROS

    Forward models play a key role in cognitive agents by providing predictions of the sensory consequences of motor commands, also known as sensorimotor contingencies (SMCs). In continuously evolving environments, the ability to anticipate is fundamental in distinguishing cognitive from reactive agents, and it is particularly relevant for autonomous robots, which must be able to adapt their models in an online manner. Online learning skills, high accuracy of the forward models and multiple-step-ahead predictions are needed to enhance the robots' anticipation capabilities. We propose an online heterogeneous ensemble learning method for building accurate forward models of SMCs relating motor commands to effects in robots' sensorimotor systems, in particular considering proprioception and vision. Our method achieves up to 98% higher accuracy in both short- and long-term predictions, compared to single predictors and other online and offline homogeneous ensembles. This method is validated on two different humanoid robots, namely the iCub and the Baxter.
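
    The general idea of an online ensemble of forward models — weighting each predictor by its recent error on the incoming stream — can be sketched as follows. This is a generic illustration, not the authors' method: the `OnlineEnsemble` class, its exponential-decay error tracking, and the toy models are invented for the example.

```python
import numpy as np

class OnlineEnsemble:
    # Combine forward models with weights that track each model's
    # recent prediction error (inverse-error weighting).
    def __init__(self, models, decay=0.9):
        self.models = models
        self.errors = np.ones(len(models))  # running error estimates
        self.decay = decay

    def predict(self, x):
        preds = np.array([m(x) for m in self.models])
        weights = 1.0 / self.errors
        weights /= weights.sum()
        return float(weights @ preds)

    def update(self, x, target):
        # Exponentially-decayed absolute error per model.
        for i, m in enumerate(self.models):
            err = abs(m(x) - target)
            self.errors[i] = self.decay * self.errors[i] + (1 - self.decay) * err

# Two toy "forward models" of a motor command -> sensory effect mapping.
good = lambda u: 2.0 * u          # close to the true system
bad = lambda u: 2.0 * u + 5.0     # biased model
ens = OnlineEnsemble([good, bad])

for u in np.linspace(0, 1, 50):   # stream of (command, effect) pairs
    ens.update(u, 2.0 * u)

print(ens.predict(0.5))  # close to 1.0: the accurate model dominates
```

    A heterogeneous ensemble in this spirit mixes predictors of different families (e.g. parametric and non-parametric), letting the weighting pick whichever is currently most accurate.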

    Sarabia M, Lee K, Demiris Y, 2015,

    Towards a Synchronised Grammars Framework for Adaptive Musical Human-Robot Collaboration

    , IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Publisher: IEEE, Pages: 715-721

    We present an adaptive musical collaboration framework for interaction between a human and a robot. The aim of our work is to develop a system that receives feedback from the user in real time and learns the music progression style of the user over time. To tackle this problem, we represent a song as a hierarchically structured sequence of music primitives. By exploiting the sequential constraints of these primitives inferred from the structural information combined with user feedback, we show that a robot can play music in accordance with the user's anticipated actions. We use Stochastic Context-Free Grammars augmented with the knowledge of the learnt user preferences. We provide synthetic experiments as well as a pilot study with a Baxter robot and a tangible music table. The synthetic results show the synchronisation and adaptivity features of our framework, and the pilot study suggests these are applicable to creating an effective musical collaboration experience.

    Kucukyilmaz A, Demiris Y, 2015,

    One-shot assistance estimation from expert demonstrations for a shared control wheelchair system

    , International Symposium on Robot and Human Interactive Communication (RO-MAN), Publisher: IEEE, Pages: 438-443

    An emerging research problem in the field of assistive robotics is the design of methodologies that allow robots to provide human-like assistance to the users. Especially within the rehabilitation domain, a grand challenge is to program a robot to mimic the operation of an occupational therapist, intervening with the user when necessary so as to improve the therapeutic power of the assistive robotic system. We propose a method to estimate assistance policies from expert demonstrations to present human-like intervention during navigation in a powered wheelchair setup. For this purpose, we constructed a setting, where a human offers assistance to the user over a haptic shared control system. The robot learns from human assistance demonstrations while the user is actively driving the wheelchair in an unconstrained environment. We train a Gaussian process regression model to learn assistance commands given past and current actions of the user and the state of the environment. The results indicate that the model can estimate human assistance after only a single demonstration, i.e. in one-shot, so that the robot can help the user by selecting the appropriate assistance in a human-like fashion.
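
    The estimator underlying this kind of one-shot learning, Gaussian process regression fitted to a single demonstration, can be illustrated with a minimal numpy sketch. This is a generic GP posterior mean with an RBF kernel; the demonstration data and hyperparameters are invented, and the real system conditions on user actions and environment state rather than a 1-D input.

```python
import numpy as np

def rbf(a, b, ell=0.15):
    # Squared-exponential kernel between two sets of 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-4):
    # Standard GP regression posterior mean: k* (K + sigma^2 I)^-1 y.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf(x_test, x_train)
    return k_star @ np.linalg.solve(K, y_train)

# One expert demonstration: assistance command as a function of user state.
x_demo = np.linspace(0, 1, 20)
y_demo = np.sin(2 * np.pi * x_demo)   # demonstrated assistance profile

y_hat = gp_predict(x_demo, y_demo, np.array([0.25]))
print(y_hat)  # close to sin(pi/2) = 1.0
```

    After the single demonstration, the posterior mean already generalises smoothly between the demonstrated states, which is what makes one-shot estimation feasible here.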

    Georgiou T, Demiris Y, 2015,

    Predicting car states through learned models of vehicle dynamics and user behaviours

    , Intelligent Vehicles Symposium (IV), Publisher: IEEE, Pages: 1240-1245

    The ability to predict forthcoming car states is crucial for the development of smart assistance systems. Forthcoming car states do not only depend on vehicle dynamics but also on user behaviour. In this paper, we describe a novel prediction methodology by combining information from both sources - vehicle and user - using Gaussian Processes. We then apply this method in the context of high speed car racing. Results show that the forthcoming position and speed of the car can be predicted with low Root Mean Square Error through the trained model.

    Soh H, Demiris Y, 2015,

    Spatio-Temporal Learning With the Online Finite and Infinite Echo-State Gaussian Processes

    , IEEE Transactions on Neural Networks and Learning Systems, Vol: 26, Pages: 522-536, ISSN: 2162-237X

    Successful biological systems adapt to change. In this paper, we are principally concerned with adaptive systems that operate in environments where data arrives sequentially and is multivariate in nature, for example, sensory streams in robotic systems. We contribute two reservoir inspired methods: 1) the online echo-state Gaussian process (OESGP) and 2) its infinite variant, the online infinite echo-state Gaussian process (OIESGP). Both algorithms are iterative fixed-budget methods that learn from noisy time series. In particular, the OESGP combines the echo-state network with Bayesian online learning for Gaussian processes. Extending this to infinite reservoirs yields the OIESGP, which uses a novel recursive kernel with automatic relevance determination that enables spatial and temporal feature weighting. When fused with stochastic natural gradient descent, the kernel hyperparameters are iteratively adapted to better model the target system. Furthermore, insights into the underlying system can be gleaned from inspection of the resulting hyperparameters. Experiments on noisy benchmark problems (one-step prediction and system identification) demonstrate that our methods yield high accuracies relative to state-of-the-art methods, and standard kernels with sliding windows, particularly on problems with irrelevant dimensions. In addition, we describe two case studies in robotic learning-by-demonstration involving the Nao humanoid robot and the Assistive Robot Transport for Youngsters (ARTY) smart wheelchair.
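
    The reservoir half of such methods — an echo-state network whose fixed random recurrence turns a time series into a rich state vector — can be sketched as follows. This is a generic ESN illustration with a ridge readout standing in for the Bayesian GP readout; the sizes, scalings and toy signal are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

class EchoState:
    # Minimal echo-state reservoir: fixed random recurrent weights,
    # scaled so the spectral radius is < 1 (the "echo-state" property).
    def __init__(self, n_in, n_res=50, radius=0.9):
        self.w_in = rng.uniform(-1, 1, (n_res, n_in))
        w = rng.normal(size=(n_res, n_res))
        self.w = w * (radius / np.max(np.abs(np.linalg.eigvals(w))))
        self.state = np.zeros(n_res)

    def step(self, u):
        self.state = np.tanh(self.w @ self.state + self.w_in @ u)
        return self.state

# Drive the reservoir with a sine wave and fit a linear ridge readout for
# one-step-ahead prediction; a GP readout would replace this last step.
esn = EchoState(1)
u = np.sin(np.linspace(0, 8 * np.pi, 400))
states = np.array([esn.step(np.array([ut])) for ut in u[:-1]])
targets = u[1:]

lam = 1e-6
gram = states.T @ states + lam * np.eye(states.shape[1])
w_out = np.linalg.solve(gram, states.T @ targets)
pred = states @ w_out
print(np.mean(np.abs(pred - targets)))  # small residual on the toy signal
```

    Only the readout is trained; the recurrent weights stay fixed, which is what keeps reservoir methods cheap enough for online, fixed-budget learning.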

    Kormushev P, Demiris Y, Caldwell DG, 2015,

    Encoderless Position Control of a Two-Link Robot Manipulator

    , IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE Computer Society, Pages: 943-949, ISSN: 1050-4729

    Gao Y, Chang HJ, Demiris Y, 2015,

    User Modelling for Personalised Dressing Assistance by Humanoid Robots

    , IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 1840-1845, ISSN: 2153-0858

    Chang HJ, Demiris Y, 2015,

    Unsupervised Learning of Complex Articulated Kinematic Structures combining Motion and Skeleton Information

    , IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 3138-3146, ISSN: 1063-6919

    Kormushev P, Demiris Y, Caldwell DG, 2015,

    Kinematic-free Position Control of a 2-DOF Planar Robot Arm

    , IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 5518-5525, ISSN: 2153-0858

    Wu Y, Su Y, Demiris Y, 2014,

    A morphable template framework for robot learning by demonstration: Integrating one-shot and incremental learning approaches

    , Robotics and Autonomous Systems, Vol: 62, Pages: 1517-1530

    Robot learning by demonstration is key to bringing robots into daily social environments to interact with and learn from humans and other agents. However, teaching a robot to acquire new knowledge is a tedious and repetitive process, and often restrictive to a specific setup of the environment. We propose a template-based learning framework for robot learning by demonstration to address both generalisation and adaptability. This novel framework is based upon a one-shot learning model integrated with spectral clustering and an online learning model to learn and adapt actions in similar scenarios. A set of statistical experiments is used to benchmark the framework components and shows that this approach requires no extensive training for generalisation and can adapt to environmental changes flexibly. Two real-world applications of an iCub humanoid robot playing the tic-tac-toe game and soldering a circuit board are used to demonstrate the relative merits of the framework.

    Soh H, Demiris Y, 2014,

    Incrementally Learning Objects by Touch: Online Discriminative and Generative Models for Tactile-Based Recognition

    , IEEE Transactions on Haptics, Vol: 7, Pages: 512-525, ISSN: 1939-1412

    Human beings not only possess the remarkable ability to distinguish objects through tactile feedback but are further able to improve upon recognition competence through experience. In this work, we explore tactile-based object recognition with learners capable of incremental learning. Using the sparse online infinite Echo-State Gaussian process (OIESGP), we propose and compare two novel discriminative and generative tactile learners that produce probability distributions over objects during object grasping/palpation. To enable iterative improvement, our online methods incorporate training samples as they become available. We also describe incremental unsupervised learning mechanisms, based on novelty scores and extreme value theory, for when teacher labels are not available. We present experimental results for both supervised and unsupervised learning tasks using the iCub humanoid, with tactile sensors on its five-fingered anthropomorphic hand, and 10 different object classes. Our classifiers perform comparably to state-of-the-art methods (C4.5 and SVM classifiers), and findings indicate that tactile signals are highly relevant for making accurate object classifications. We also show that accurate “early” classifications are possible using only 20-30 percent of the grasp sequence. For unsupervised learning, our methods generate high-quality clusterings relative to the widely used sequential k-means and self-organising map (SOM), and we present analyses of the differences between the approaches.

    Ros R, Baroni I, Demiris Y, 2014,

    Adaptive human-robot interaction in sensorimotor task instruction: From human to robot dance tutors

    , Robotics and Autonomous Systems, Vol: 62, Pages: 707-720, ISSN: 1872-793X

    We explore the potential for humanoid robots to interact with children in a dance activity. In this context, the robot plays the role of an instructor to guide the child through several dance moves to learn a dance phrase. We participated in 30 dance sessions in schools to study human–human interaction between children and a human dance teacher, and to identify the applied methodologies. Based on the strategies observed, both social and task-dependent, we implemented a robotic system capable of autonomously instructing dance sequences to children while displaying basic social cues to engage the child in the task. Experiments were performed in a hospital with the Nao robot interacting with 12 children through multiple encounters, when possible (18 sessions, 236 min). Observational analysis through video recordings and survey evaluations were used to assess the quality of interaction. Moreover, we introduce an involvement measure based on the aggregation of observed behavioral cues to assess the level of interest in the interaction through time. The analysis revealed high levels of involvement, while highlighting the need for further research into social engagement and adaptation with robots over repeated sessions.

    Ros R, Coninx A, Demiris Y, Patsis G, Enescu V, Sahli H et al., 2014,

    Behavioral Accommodation towards a Dance Robot Tutor

    , International Conference on Human-Robot Interaction, Publisher: ACM/IEEE, Pages: 278-279

    We report first results on children's adaptive behavior towards a dance tutoring robot. We observe that children's behavior rapidly evolves over a few sessions as they accommodate to the robotic tutor's rhythm and instructions.

    Demiris Y, Aziz-Zadeh L, Bonaiuto J, 2014,

    Information Processing in the Mirror Neuron System in Primates and Machines

    , Neuroinformatics, Vol: 12, Pages: 63-91, ISSN: 1539-2791

    The mirror neuron system in primates matches observations of actions with the motor representations used for their execution, and is a topic of intense research and debate in biological and computational disciplines. In robotics, models of this system have been used for enabling robots to imitate and learn how to perform tasks from human demonstrations. Yet, existing computational and robotic models of these systems are found in multiple levels of description, and although some models offer plausible explanations and testable predictions, the difference in the granularity of the experimental setups, methodologies, computational structures and selected modeled data make principled meta-analyses, common in other fields, difficult. In this paper, we adopt an interdisciplinary approach, using the BODB integrated environment in order to bring together several different but complementary computational models, by functionally decomposing them into brain operating principles (BOPs) which each capture a limited subset of the model’s functionality. We then explore links from these BOPs to neuroimaging and neurophysiological data in order to pinpoint complementary and conflicting explanations and compare predictions against selected sets of neurobiological data. The results of this comparison are used to interpret mirror system neuroimaging results in terms of neural network activity, evaluate the biological plausibility of mirror system models, and suggest new experiments that can shed light on the neural basis of mirror systems.

    Su Y, Dong W, Wu Y, Du Z, Demiris Y et al., 2014,

    Increasing the Accuracy and the Repeatability of Position Control for Micromanipulations Using Heteroscedastic Gaussian Processes

    , IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 4692-4698, ISSN: 1050-4729
    Lee K, Su Y, Kim T-K, Demiris Y et al., 2013,

    A syntactic approach to robot imitation learning using probabilistic activity grammars

    , Robotics and Autonomous Systems, Vol: 61, Pages: 1323-1334, ISSN: 0921-8890
    Korkinof D, Demiris Y, 2013,

    Online Quantum Mixture Regression for Trajectory Learning by Demonstration

    , IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013), Publisher: IEEE, Pages: 3222-3229

    In this work, we present the online Quantum Mixture Model (oQMM), which combines the merits of quantum mechanics and stochastic optimization. More specifically, it allows for quantum effects on the mixture states, which in turn become a superposition of conventional mixture states. We propose an efficient stochastic online learning algorithm based on online Expectation Maximization (EM), as well as a generation and decay scheme for model components. Our method is suitable for complex robotic applications where data is abundant or where we wish to iteratively refine our model and conduct predictions during the course of learning. With a synthetic example, we show that the algorithm can achieve higher numerical stability. We also empirically demonstrate the efficacy of our method on well-known regression benchmark datasets. In a trajectory Learning by Demonstration setting, we employ a multi-shot learning application in joint angle space, where we observe higher quality of learning and reproduction. We compare against popular and well-established methods widely adopted across the robotics community.

    Soh H, Demiris Y, 2013,

    When and how to help: An iterative probabilistic model for learning assistance by demonstration

    , International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 3230-3236, ISSN: 2153-0858

    Crafting a proper assistance policy is a difficult endeavour but essential for the development of robotic assistants. Indeed, assistance is a complex issue that depends not only on the task at hand, but also on the state of the user, the environment and competing objectives. As a way forward, this paper proposes learning the task of assistance through observation; an approach we term Learning Assistance by Demonstration (LAD). Our methodology is a subclass of Learning by Demonstration (LbD), yet directly addresses difficult issues associated with proper assistance, such as when and how to appropriately assist. To learn assistive policies, we develop a probabilistic model that explicitly captures these elements and provide efficient, online training methods. Experimental results on smart mobility assistance — using both simulation and a real-world smart wheelchair platform — demonstrate the effectiveness of our approach; the LAD model quickly learns when to assist (achieving an AUC score of 0.95 after only one demonstration) and improves with additional examples. Results show that this translates into better task performance; our LAD-enabled smart wheelchair improved participant driving performance (measured in lap time) by 20.6 s (a speedup of 137%) after a single teacher demonstration.

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
