Cully A, Demiris Y, 2018, Quality and Diversity Optimization: A Unifying Modular Framework, IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, Vol: 22, Pages: 245-259, ISSN: 1089-778X
Fischer T, Puigbo J-Y, Camilleri D, et al., 2018, iCub-HRI: A Software Framework for Complex Human-Robot Interaction Scenarios on the iCub Humanoid Robot, FRONTIERS IN ROBOTICS AND AI, Vol: 5, ISSN: 2296-9144
Chang HJ, Demiris Y, 2017, Highly Articulated Kinematic Structure Estimation combining Motion and Skeleton Information, IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN: 0162-8828
Chang HJ, Fischer T, Petit M, et al., 2017, Learning Kinematic Structure Correspondences Using Multi-Order Similarities, IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN: 0162-8828
We present a novel framework for finding kinematic structure correspondences between two articulated objects in videos via hypergraph matching. In contrast to appearance- and graph-alignment-based matching methods, which have been applied to pairs of similar static images, the proposed method finds correspondences between the dynamic kinematic structures of heterogeneous objects in videos. Our method thus allows matching the structures of objects which have similar topologies, similar motions, or a combination of the two. Our main contributions are summarised as follows: (i) casting the kinematic structure correspondence problem as a hypergraph matching problem by incorporating multi-order similarities with normalising weights, (ii) introducing a structural topology similarity measure by aggregating topology-constrained subgraph isomorphisms, (iii) measuring kinematic correlations between pairwise nodes, and (iv) proposing a combinatorial local motion similarity measure using geodesic distance on the Riemannian manifold. We demonstrate the robustness and accuracy of our method through a number of experiments on synthetic and real data, showing that it outperforms various recent and state-of-the-art methods. Our method is not limited to a specific application or sensor, and can be used as a building block in applications such as action recognition, human motion retargeting to robots, and articulated object manipulation.
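The multi-order matching idea can be illustrated in a deliberately simplified form. The sketch below is not the paper's hypergraph formulation: it combines only first-order (node motion) and second-order (edge topology) similarities, uses toy scalar motion descriptors, and brute-forces the assignment over a three-node chain; all of these are hypothetical stand-ins for the paper's normalised multi-order similarities and hypergraph solver.

```python
import itertools
import math

# Toy articulated structures: each node carries a scalar motion descriptor
# (a stand-in for a local motion similarity feature), and edges encode the
# kinematic topology.
A_nodes = [0.1, 0.5, 0.9]        # chain A: joints 0-1-2
B_nodes = [0.45, 0.95, 0.15]     # chain B: the same chain, relabelled 2-0-1
A_edges = {(0, 1), (1, 2)}
B_edges = {(2, 0), (0, 1)}

def node_sim(a, b):
    # First-order similarity: agreement of node motion descriptors.
    return math.exp(-(a - b) ** 2)

def score(perm):
    # perm[i] is the node of B matched to node i of A.
    s = sum(node_sim(A_nodes[i], B_nodes[perm[i]]) for i in range(len(A_nodes)))
    # Second-order similarity: reward A-edges that map onto B-edges.
    for i, j in A_edges:
        if (perm[i], perm[j]) in B_edges or (perm[j], perm[i]) in B_edges:
            s += 1.0
    return s

# Brute-force over all one-to-one matchings (feasible only for tiny graphs;
# the paper uses a proper hypergraph matching solver instead).
best = max(itertools.permutations(range(len(B_nodes))), key=score)
print(best)  # (2, 0, 1): recovers the relabelled chain
```

Combining the two orders is what disambiguates the matching: node similarity alone cannot tell which of two topology-preserving assignments is correct, and topology alone cannot tell the chain from its reversal.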
Choi J, Chang HJ, Yun S, et al., 2017, Attentional Correlation Filter Network for Adaptive Visual Tracking, 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 4828-4837, ISSN: 1063-6919
Elsdon J, Demiris Y, 2017, Assisted painting of 3D structures using shared control with a hand-held robot, IEEE International Conference on Robotics and Automation, Publisher: IEEE
Abstract— We present a shared control method for painting 3D geometries using a handheld robot with a single autonomously controlled degree of freedom. The user scans the robot near the desired painting location, and the single movement axis moves the spray head to achieve the required paint distribution. A simultaneous simulation of the spraying procedure is performed, giving an open-loop approximation of the current state of the painting. An online prediction of the best path for the spray nozzle actuation is calculated in a receding horizon fashion, by producing a map of the paint required in the 2D space defined by nozzle position on the gantry and time into the future. A directed graph extracts its edge weights from this paint density map, and Dijkstra's algorithm is then used to find the candidate for the most effective path. Because the approach is heavily parallelised, with the majority of the calculations taking place on a GPU, the prediction loop runs in 32.6 ms for a prediction horizon of 1 second; the approach is computationally efficient and outperforms a greedy algorithm. On average, the path chosen by the proposed method is in the top 15% of all paths as calculated by exhaustive testing. This approach enables the development of real-time path planning for assisted spray painting onto complicated 3D geometries, with possible applications such as assistive painting for people with disabilities, or accurate placement of liquid when large-scale positioning of the head is too expensive.
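The paint-map-to-graph step can be sketched as follows. This is a minimal illustration, not the authors' implementation: the density map, the edge-cost rule (peak density minus the density of the cell entered, so low cost means high paint demand), and the virtual source/sink nodes are all assumptions made for the example.

```python
import heapq
import itertools

def dijkstra(adj, start, goal):
    """Standard Dijkstra over a dict adjacency: node -> [(neighbour, cost)]."""
    dist = {start: 0.0}
    prev = {}
    tie = itertools.count()  # break priority ties without comparing nodes
    pq = [(0.0, next(tie), start)]
    while pq:
        d, _, u = heapq.heappop(pq)
        if u == goal:
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, c in adj.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, next(tie), v))
    return float("inf"), []

# Hypothetical paint density map: rows = future time steps, columns = nozzle
# positions on the gantry; higher values mean more paint is still required.
density = [
    [0.1, 0.9, 0.2],
    [0.8, 0.3, 0.1],
    [0.2, 0.7, 0.9],
]
T, X = len(density), len(density[0])
peak = max(max(row) for row in density)

# Edge cost = paint demand left unmet in the cell entered, so the cheapest
# path visits the cells that need the most paint. The nozzle may stay put or
# move one position per time step.
adj = {(t, x): [((t + 1, nx), peak - density[t + 1][nx])
                for nx in range(X) if abs(nx - x) <= 1]
       for t in range(T - 1) for x in range(X)}
adj["src"] = [((0, x), peak - density[0][x]) for x in range(X)]
for x in range(X):
    adj[(T - 1, x)] = [("sink", 0.0)]

cost, path = dijkstra(adj, "src", "sink")
print(path)  # ['src', (0, 1), (1, 0), (2, 1), 'sink']
```

The virtual source and sink leave the start and end columns unconstrained, so a single shortest-path query compares all candidate nozzle trajectories over the horizon at once.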
Georgiou T, Demiris Y, 2017, Adaptive user modelling in car racing games using behavioural and physiological data, USER MODELING AND USER-ADAPTED INTERACTION, Vol: 27, Pages: 267-311, ISSN: 0924-1868
Korkinof D, Demiris Y, 2017, Multi-task and multi-kernel Gaussian process dynamical systems, PATTERN RECOGNITION, Vol: 66, Pages: 190-201, ISSN: 0031-3203
Moulin-Frier C, Fischer T, Petit M, et al., 2017, DAC-h3: A Proactive Robot Cognitive Architecture to Acquire and Express Knowledge About the World and the Self, IEEE Transactions on Cognitive and Developmental Systems, ISSN: 2379-8920
This paper introduces a cognitive architecture for a humanoid robot to engage in proactive, mixed-initiative exploration and manipulation of its environment, where the initiative can originate from both the human and the robot. The framework, based on a biologically grounded theory of the brain and mind, integrates a reactive interaction engine, a number of state-of-the-art perceptual and motor learning algorithms, as well as planning abilities and an autobiographical memory. The architecture as a whole drives the robot's behavior to solve the symbol grounding problem, acquire language capabilities, execute goal-oriented behavior, and express a verbal narrative of its own experience in the world. We validate our approach in human-robot interaction experiments with the iCub humanoid robot, showing that the proposed cognitive architecture can be applied in real time within a realistic scenario and that it can be used with naive users.
Yoo Y, Yun S, Chang HJ, et al., 2017, Variational Autoencoded Regression: High Dimensional Regression of Visual Data on Complex Manifold, 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 2943-2952, ISSN: 1063-6919
Zambelli M, Demiris Y, 2017, Online Multimodal Ensemble Learning Using Self-Learned Sensorimotor Representations, IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, Vol: 9, Pages: 113-126, ISSN: 2379-8920
Zhang F, Cully A, Demiris Y, 2017, Personalized Robot-assisted Dressing using User Modeling in Latent Spaces, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 3603-3610, ISSN: 2153-0858
Chang HJ, Fischer T, Petit M, et al., 2016, Kinematic Structure Correspondences via Hypergraph Matching, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 4216-4225, ISSN: 1063-6919
Choi J, Chang HJ, Jeong J, et al., 2016, Visual Tracking Using Attention-Modulated Disintegration and Integration, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 4321-4330, ISSN: 1063-6919
Coninx A, Baxter P, Oleari E, et al., 2016, Towards Long-Term Social Child-Robot Interaction: Using Multi-Activity Switching to Engage Young Users, JOURNAL OF HUMAN-ROBOT INTERACTION, Vol: 5, Pages: 32-67, ISSN: 2163-0364
Fischer T, Demiris Y, 2016, Markerless Perspective Taking for Humanoid Robots in Unconstrained Environments, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 3309-3316, ISSN: 1050-4729
Gao Y, Chang HJ, Demiris Y, 2016, Personalised assistive dressing by humanoid robots using multi-modal information, Workshop on Human-Robot Interfaces for Enhanced Physical Interactions at ICRA
In this paper, we present an approach that enables a humanoid robot to provide personalised dressing assistance for human users using multi-modal information. A depth sensor mounted on top of the robot provides visual information, and the robot's end effectors are equipped with force sensors to provide haptic information. We use the visual information to model the movement range of the user's upper-body parts, and the robot plans its dressing motions using these movement range models and the real-time human pose. During assistive dressing, the force sensors are used to detect external force resistances, and we show how the robot locally adjusts its motions based on the detected forces. In the experiments, we show that the robot can assist a human in putting on a sleeveless jacket while reacting to force resistances.
Gao Y, Chang HJ, Demiris Y, 2016, Iterative Path Optimisation for Personalised Dressing Assistance using Vision and Force Information, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 4398-4403
Georgiou T, Demiris Y, 2016, Personalised Track Design in Car Racing Games, IEEE Conference on Computational Intelligence and Games (CIG), Publisher: IEEE, ISSN: 2325-4270
Kristan M, Leonardis A, Matas J, et al., 2016, The Visual Object Tracking VOT2016 Challenge Results, 14th European Conference on Computer Vision (ECCV), Publisher: SPRINGER INT PUBLISHING AG, Pages: 777-823, ISSN: 0302-9743
Petit M, Demiris Y, 2016, Hierarchical Action Learning by Instruction Through Interactive Grounding of Body Parts and Proto-actions, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 3375-3382, ISSN: 1050-4729
Petit M, Fischer T, Demiris Y, 2016, Lifelong Augmentation of Multimodal Streaming Autobiographical Memories, IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, Vol: 8, Pages: 201-213, ISSN: 2379-8920
Petit M, Fischer T, Demiris Y, 2016, Towards the Emergence of Procedural Memories from Lifelong Multi-Modal Streaming Memories for Cognitive Robots, Workshop on Machine Learning Methods for High-Level Cognitive Capabilities in Robotics at IEEE/RSJ IROS
Various research topics are emerging as the demand for intelligent lifelong interaction between robots and humans increases, among them persistent storage, the continuous unsupervised annotation of memories, and the use of high-frequency data over long periods of time. We recently proposed a lifelong autobiographical memory architecture tackling some of these challenges, allowing the iCub humanoid robot to 1) create new memories both for self-executed actions and for actions observed from humans, 2) continuously annotate these actions in an unsupervised manner, and 3) use reasoning modules to augment these memories a posteriori. In this paper, we present a reasoning algorithm which generalises the robot's understanding of actions by finding points of commonality with previous ones. In particular, we generated and labelled templates of pointing actions in different directions. This represents a first step towards the emergence of a procedural memory within a long-term autobiographical memory framework for robots.
Ribes A, Cerquides J, Demiris Y, et al., 2016, Active Learning of Object and Body Models with Time Constraints on a Humanoid Robot, IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, Vol: 8, Pages: 26-41, ISSN: 2379-8920
Ros R, Oleari E, Pozzi C, et al., 2016, A Motivational Approach to Support Healthy Habits in Long-term Child-Robot Interaction, International Journal of Social Robotics, Vol: 8, Pages: 599-617, ISSN: 1875-4791
Zambelli M, Demiris Y, 2016, Multimodal Imitation using Self-learned Sensorimotor Representations, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 3953-3958
Zambelli M, Fischer T, Petit M, et al., 2016, Towards Anchoring Self-Learned Representations to Those of Other Agents, Workshop on Bio-inspired Social Robot Learning in Home Scenarios IEEE/RSJ International Conference on Intelligent Robots and Systems, Publisher: Institute of Electrical and Electronics Engineers (IEEE)
In the future, robots will support humans in their everyday activities. One particular challenge that robots will face is understanding and reasoning about the actions of other agents in order to cooperate effectively with humans. We propose to tackle this using a developmental framework in which the robot incrementally acquires knowledge, and in particular 1) self-learns a mapping between motor commands and sensory consequences, 2) rapidly acquires primitives and complex actions from verbal descriptions and instructions given by a human partner, 3) discovers correspondences between the robot's body and other articulated objects and agents, and 4) employs these correspondences to transfer the knowledge acquired from the robot's point of view to the viewpoint of the other agent. We show that our approach requires very little a priori knowledge to achieve imitation learning and to find corresponding body parts of humans, and that it allows taking the perspective of another agent. This represents a step towards the emergence of a mirror-neuron-like system based on self-learned representations.
Chang HJ, Demiris Y, 2015, Unsupervised Learning of Complex Articulated Kinematic Structures combining Motion and Skeleton Information, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 3138-3146, ISSN: 1063-6919
Gao Y, Chang HJ, Demiris Y, 2015, User Modelling for Personalised Dressing Assistance by Humanoid Robots, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 1840-1845, ISSN: 2153-0858
Georgiou T, Demiris Y, 2015, Predicting car states through learned models of vehicle dynamics and user behaviours, Intelligent Vehicles Symposium (IV), Publisher: IEEE, Pages: 1240-1245
The ability to predict forthcoming car states is crucial for the development of smart assistance systems. Forthcoming car states depend not only on vehicle dynamics but also on user behaviour. In this paper, we describe a novel prediction methodology that combines information from both sources - vehicle and user - using Gaussian Processes. We then apply this method in the context of high-speed car racing. Results show that the forthcoming position and speed of the car can be predicted with a low root mean square error by the trained model.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.