Below is a list of all relevant publications authored by Robotics Forum members.

  • Conference paper
Choi J, Chang HJ, Fischer T, Yun S, Lee K, Jeong J, Demiris Y, Choi JY et al., 2018,

    Context-aware deep feature compression for high-speed visual tracking

    , IEEE Conference on Computer Vision and Pattern Recognition, Publisher: Institute of Electrical and Electronics Engineers, Pages: 479-488, ISSN: 1063-6919

We propose a new context-aware correlation-filter-based tracking framework to achieve both high computational speed and state-of-the-art performance among real-time trackers. The major contribution to the high computational speed lies in the proposed deep feature compression, which is achieved by a context-aware scheme utilizing multiple expert auto-encoders; a context in our framework refers to the coarse category of the tracking target according to appearance patterns. In the pre-training phase, one expert auto-encoder is trained per category. In the tracking phase, the best expert auto-encoder is selected for a given target, and only this auto-encoder is used. To achieve high tracking performance with the compressed feature map, we introduce extrinsic denoising processes and a new orthogonality loss term for pre-training and fine-tuning of the expert auto-encoders. We validate the proposed context-aware framework through a number of experiments, where our method achieves performance comparable to state-of-the-art trackers that cannot run in real time, while running at over 100 fps.
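
    The abstract does not spell out the orthogonality loss; as a loose, hedged illustration of the general idea, the sketch below penalises the off-diagonal entries of the Gram matrix of compressed features, which is one common way to encourage orthogonality. The formulation in the paper may differ.

    ```python
    import torch
    import torch.nn.functional as F

    def orthogonality_loss(z):
        # z: (N, D) batch of compressed bottleneck features.
        # Zero when all feature vectors are mutually orthogonal.
        z = F.normalize(z, dim=1)                   # unit-length rows
        gram = z @ z.t()                            # (N, N) cosine similarities
        off_diag = gram - torch.eye(z.size(0), device=z.device)
        return off_diag.pow(2).sum()
    ```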

  • Journal article
Moulin-Frier C, Fischer T, Petit M, Pointeau G, Puigbo JY, Pattacini U, Low SC, Camilleri D, Nguyen P, Hoffmann M, Chang HJ, Zambelli M, Mealier AL, Damianou A, Metta G, Prescott TJ, Demiris Y, Dominey PF, Verschure PFMJ et al., 2018,

    DAC-h3: A Proactive Robot Cognitive Architecture to Acquire and Express Knowledge About the World and the Self

    , IEEE Transactions on Cognitive and Developmental Systems, Vol: 10, Pages: 1005-1022, ISSN: 2379-8920

This paper introduces a cognitive architecture for a humanoid robot to engage in a proactive, mixed-initiative exploration and manipulation of its environment, where the initiative can originate from both the human and the robot. The framework, based on a biologically-grounded theory of the brain and mind, integrates a reactive interaction engine, a number of state-of-the-art perceptual and motor learning algorithms, as well as planning abilities and an autobiographical memory. The architecture as a whole drives the robot behavior to solve the symbol grounding problem, acquire language capabilities, execute goal-oriented behavior, and express a verbal narrative of its own experience in the world. We validate our approach in human-robot interaction experiments with the iCub humanoid robot, showing that the proposed cognitive architecture can be applied in real time within a realistic scenario and that it can be used with naive users.

  • Journal article
Chang HJ, Fischer T, Petit M, Zambelli M, Demiris Y et al., 2018,

    Learning kinematic structure correspondences using multi-order similarities

    , IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol: 40, Pages: 2920-2934, ISSN: 0162-8828

We present a novel framework for finding the kinematic structure correspondences between two articulated objects in videos via hypergraph matching. In contrast to appearance- and graph-alignment-based matching methods, which have been applied between two similar static images, the proposed method finds correspondences between two dynamic kinematic structures of heterogeneous objects in videos. Thus our method allows matching the structure of objects which have similar topologies or motions, or a combination of the two. Our main contributions are summarised as follows: (i) casting the kinematic structure correspondence problem into a hypergraph matching problem by incorporating multi-order similarities with normalising weights, (ii) introducing a structural topology similarity measure by aggregating topology constrained subgraph isomorphisms, (iii) measuring kinematic correlations between pairwise nodes, and (iv) proposing a combinatorial local motion similarity measure using geodesic distance on the Riemannian manifold. We demonstrate the robustness and accuracy of our method through a number of experiments on synthetic and real data, showing that various other recent and state-of-the-art methods are outperformed. Our method is not limited to a specific application nor sensor, and can be used as a building block in applications such as action recognition, human motion retargeting to robots, and articulated object manipulation.
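
    The "geodesic distance on the Riemannian manifold" used for motion similarity has a standard concrete instance when local motions are represented as rotations. The sketch below, a generic illustration rather than the paper's exact measure, computes the geodesic distance between two rotation matrices on SO(3):

    ```python
    import numpy as np

    def rotation_geodesic(R1, R2):
        # Angle of the relative rotation R1^T R2: the length of the
        # shortest path between R1 and R2 on the SO(3) manifold.
        R = R1.T @ R2
        cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
        return np.arccos(cos_theta)
    ```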

  • Conference paper
    Wang K, Shah A, Kormushev P, 2018,

    SLIDER: A Bipedal Robot with Knee-less Legs and Vertical Hip Sliding Motion

    , 21st International Conference on Climbing and Walking Robots and Support Technologies for Mobile Machines (CLAWAR 2018)
  • Journal article
Sarabia M, Young N, Canavan K, Edginton T, Demiris Y, Vizcaychipi MP et al., 2018,

    Assistive robotic technology to combat social isolation in acute hospital settings

    , International Journal of Social Robotics, Vol: 10, Pages: 607-620, ISSN: 1875-4791

Social isolation in hospitals is a well-established risk factor for complications such as cognitive decline and depression. Assistive robotic technology has the potential to combat this problem, but first it is critical to investigate how hospital patients react to this technology. In order to address this question, we introduced a remotely operated NAO humanoid robot which conversed, made jokes, played music, danced and exercised with patients in a London hospital. In total, 49 patients aged between 18 and 100 took part in the study, 7 of whom had dementia. Our results show that a majority of patients enjoyed their interaction with NAO. We also found that age and dementia significantly affect the interaction, whereas gender does not. These results indicate that hospital patients enjoy socialising with robots, opening new avenues for future research into the potential health benefits of a social robotic companion.

  • Journal article
Saeedi Gharahbolagh S, Bodin B, Wagstaff H, Nisbet A, Nardi L, Mawer J, Melot N, Palomar O, Vespa E, Gorgovan C, Webb A, Clarkson J, Tomusk E, Debrunner T, Kaszyk K, Gonzalez P, Rodchenko A, Riley G, Kotselidis C, Franke B, O'Boyle M, Davison A, Kelly P, Lujan M, Furber S et al., 2018,

    Navigating the landscape for real-time localisation and mapping for robotics, virtual and augmented reality

    , Proceedings of the IEEE, Vol: 106, Pages: 2020-2039, ISSN: 0018-9219

    Visual understanding of 3-D environments in real time, at low power, is a huge computational challenge. Often referred to as simultaneous localization and mapping (SLAM), it is central to applications spanning domestic and industrial robotics, autonomous vehicles, and virtual and augmented reality. This paper describes the results of a major research effort to assemble the algorithms, architectures, tools, and systems software needed to enable delivery of SLAM, by supporting applications specialists in selecting and configuring the appropriate algorithm and the appropriate hardware, and compilation pathway, to meet their performance, accuracy, and energy consumption goals. The major contributions we present are: 1) tools and methodology for systematic quantitative evaluation of SLAM algorithms; 2) automated, machine-learning-guided exploration of the algorithmic and implementation design space with respect to multiple objectives; 3) end-to-end simulation tools to enable optimization of heterogeneous, accelerated architectures for the specific algorithmic requirements of the various SLAM algorithmic approaches; and 4) tools for delivering, where appropriate, accelerated, adaptive SLAM solutions in a managed, JIT-compiled, adaptive runtime context.
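
    Design-space exploration "with respect to multiple objectives" ultimately amounts to keeping the non-dominated configurations over accuracy, performance, and energy. The following minimal Pareto-front filter is a generic sketch of that step, not code from the project's tool chain:

    ```python
    import numpy as np

    def pareto_front(costs):
        # costs: (N, M) array, lower is better in every objective
        # (e.g. tracking error, energy per frame, runtime per frame).
        # Returns indices of non-dominated configurations.
        n = costs.shape[0]
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            dominators = (costs <= costs[i]).all(axis=1) & \
                         (costs < costs[i]).any(axis=1)
            if dominators.any():
                keep[i] = False
        return np.where(keep)[0]
    ```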

  • Conference paper
    Fischer T, Chang HJ, Demiris Y, 2018,

    RT-GENE: Real-time eye gaze estimation in natural environments

    , European Conference on Computer Vision, Publisher: Springer Verlag, Pages: 339-357, ISSN: 0302-9743

In this work, we consider the problem of robust gaze estimation in natural environments. Large camera-to-subject distances and high variations in head pose and eye gaze angles are common in such environments. This leads to two main shortfalls in state-of-the-art methods for gaze estimation: hindered ground truth gaze annotation and diminished gaze estimation accuracy as image resolution decreases with distance. We first record a novel dataset of varied gaze and head pose images in a natural environment, addressing the issue of ground truth annotation by measuring head pose using a motion capture system and eye gaze using mobile eye-tracking glasses. We apply semantic image inpainting to the area covered by the glasses to bridge the gap between training and testing images by removing the obtrusiveness of the glasses. We also present a new real-time algorithm involving appearance-based deep convolutional neural networks with increased capacity to cope with the diverse images in the new dataset. Experiments with this network architecture are conducted on a number of diverse eye-gaze datasets including our own, and in cross-dataset evaluations. We demonstrate state-of-the-art performance in terms of estimation accuracy in all experiments, and the architecture performs well even on lower-resolution images.
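
    As a rough, hedged illustration of the appearance-based approach (not the actual RT-GENE architecture, whose details are in the paper), the sketch below fuses a small CNN branch per eye patch with a head-pose vector, assumed here to be yaw and pitch, to regress gaze angles:

    ```python
    import torch
    import torch.nn as nn

    class TwoEyeGazeNet(nn.Module):
        # One small CNN branch per eye patch, fused with the head-pose
        # vector to regress gaze yaw/pitch. Illustrative only.
        def __init__(self):
            super().__init__()
            def branch():
                return nn.Sequential(
                    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.left, self.right = branch(), branch()
            self.head = nn.Sequential(nn.Linear(2, 16), nn.ReLU())
            self.out = nn.Linear(32 + 32 + 16, 2)   # gaze (yaw, pitch)

        def forward(self, left_eye, right_eye, head_pose):
            z = torch.cat([self.left(left_eye), self.right(right_eye),
                           self.head(head_pose)], dim=1)
            return self.out(z)
    ```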

  • Conference paper
Nguyen P, Fischer T, Chang HJ, Pattacini U, Metta G, Demiris Y et al., 2018,

    Transferring visuomotor learning from simulation to the real world for robotics manipulation tasks

    , IEEE/RSJ International Conference on Intelligent Robots and Systems, Publisher: IEEE, Pages: 6667-6674, ISSN: 2153-0866

Hand-eye coordination is a requirement for many manipulation tasks including grasping and reaching. However, accurate hand-eye coordination has been shown to be especially difficult to achieve in complex robots like the iCub humanoid. In this work, we solve the hand-eye coordination task using a visuomotor deep neural network predictor that estimates the arm's joint configuration given a stereo image pair of the arm and the underlying head configuration. As there are various unavoidable sources of sensing error on the physical robot, we train the predictor on images obtained from simulation. The images from simulation were modified to look realistic using an image-to-image translation approach. In various experiments, we first show that the visuomotor predictor provides accurate joint estimates of the iCub's hand in simulation. We then show that the predictor can be used to obtain the systematic error of the robot's joint measurements on the physical iCub robot. We demonstrate that a calibrator can be designed to automatically compensate for this error. Finally, we validate that this enables accurate reaching of objects while circumventing manual fine-calibration of the robot.
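
    The abstract does not detail the calibrator; a minimal sketch, assuming the systematic error can be approximated by a constant per-joint offset estimated against the visuomotor predictor, might look like this:

    ```python
    import numpy as np

    def fit_joint_offsets(predicted, measured):
        # predicted, measured: (N, J) joint angles over N observed poses.
        # Least-squares constant per-joint offset between the visuomotor
        # predictor's estimates and the robot's encoder readings.
        return (predicted - measured).mean(axis=0)

    def calibrated(measured_joints, offsets):
        # Apply the fitted compensation to raw encoder readings.
        return measured_joints + offsets
    ```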

  • Conference paper
    Chacon Quesada R, Demiris Y, 2018,

    Augmented reality control of smart wheelchair using eye-gaze–enabled selection of affordances

, IROS 2018 Workshop on Robots for Assisted Living, https://www.idiap.ch/workshop/iros2018/files/

In this paper we present a novel augmented reality head-mounted display user interface for controlling a robotic wheelchair for people with limited mobility. To lower the cognitive requirements needed to control the wheelchair, we propose integrating a smart wheelchair with an eye-tracking-enabled head-mounted display. We propose a novel platform that integrates multiple user-interface interaction methods for aiming at and selecting affordances derived from on-board perception capabilities such as laser-scanner readings and cameras. We demonstrate the effectiveness of the approach by evaluating our platform in two realistic scenarios: 1) door detection, where the affordance corresponds to a Door object and the Go-Through action, and 2) people detection, where the affordance corresponds to a Person and the Approach action. To the best of our knowledge, this is the first demonstration of an augmented reality head-mounted display user interface for controlling a smart wheelchair.

  • Conference paper
    Saputra RP, Kormushev P, 2018,

    Casualty detection from 3D point cloud data for autonomous ground mobile rescue robots

    , SSRR 2018, Publisher: IEEE

One of the most important features of mobile rescue robots is the ability to autonomously detect casualties, i.e. human bodies, which are usually lying on the ground. This paper proposes a novel method for autonomously detecting casualties lying on the ground using 3D point-cloud data obtained from an on-board sensor, such as an RGB-D camera or a 3D LIDAR, on a mobile rescue robot. In this method, the obtained 3D point-cloud data is projected onto the detected ground plane, i.e. floor, within the point cloud. This projected point cloud is then converted into a grid map that is used afterwards as an input for the algorithm to detect human body shapes. The proposed method is evaluated by performing detections of a human dummy, placed in different random positions and orientations, using an on-board RGB-D camera on a mobile rescue robot called ResQbot. To evaluate the robustness of the casualty detection method to different camera angles, the orientation of the camera is set to different angles. The experimental results show that, using the point-cloud data from the on-board RGB-D camera, the proposed method successfully detects the casualty in all tested body positions and orientations relative to the on-board camera, as well as in all tested camera angles.
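
    The central preprocessing step (projecting the cloud onto the detected floor and rasterising it into a grid map) could look like the following sketch; the plane coefficients, cell size, and grid dimensions are illustrative assumptions rather than the paper's values:

    ```python
    import numpy as np

    def ground_projected_grid(points, plane, cell=0.05, size=(200, 200)):
        # points: (N, 3) cloud; plane: (a, b, c, d) with ax + by + cz + d = 0
        # describing the detected floor. Projects points onto the plane and
        # bins them into cells of `cell` metres, centred on the grid.
        n = np.asarray(plane[:3], dtype=float)
        norm = np.linalg.norm(n)
        n, d = n / norm, plane[3] / norm
        dist = points @ n + d                  # signed distance to the floor
        proj = points - dist[:, None] * n      # orthogonal projection
        u = np.cross(n, [0.0, 0.0, 1.0])       # first in-plane basis vector
        if np.linalg.norm(u) < 1e-6:           # floor normal is ~ the z axis
            u = np.cross(n, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(n, u)                     # second in-plane basis vector
        uv = np.stack([proj @ u, proj @ v], axis=1)
        idx = np.floor(uv / cell).astype(int) + np.array(size) // 2
        ok = (idx >= 0).all(axis=1) & (idx < np.array(size)).all(axis=1)
        grid = np.zeros(size, dtype=np.uint8)
        grid[idx[ok, 0], idx[ok, 1]] = 1
        return grid
    ```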

  • Conference paper
Pittiglio G, Kogkas A, Vrielink JO, Mylonas G et al., 2018,

    Dynamic Control of Cable Driven Parallel Robots with Unknown Cable Stiffness: a Joint Space Approach

    , IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE COMPUTER SOC, Pages: 948-955, ISSN: 1050-4729
  • Conference paper
Vrielink TJCO, Chao M, Darzi A, Mylonas GP et al., 2018,

    ESD CYCLOPS: A new robotic surgical system for GI surgery

    , IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE Computer Soc., Pages: 150-157, ISSN: 1050-4729

Gastrointestinal (GI) cancers account for 1.5 million deaths worldwide. Endoscopic Submucosal Dissection (ESD) is an advanced therapeutic endoscopy technique with superior clinical outcomes due to the minimally invasive and en bloc removal of tumours. In the western world, ESD is seldom carried out due to its complex and challenging nature. Various surgical systems are being developed to make this therapy accessible; however, these solutions have shown limited operational workspace, dexterity, or low force exertion capabilities. This paper presents the ESD CYCLOPS system, a bimanual surgical robotic attachment that can be mounted at the end of any flexible endoscope. The system is able to achieve forces of up to 46 N, and showed a mean error of 0.217 mm during an elliptical tracing task. The workspace and instrument dexterity are demonstrated by pre-clinical ex vivo trials, in which ESD was successfully performed by a GI surgeon. The system is currently undergoing pre-clinical in vivo validation.

  • Conference paper
    Runciman M, Darzi A, Mylonas G, 2018,

    Deployable disposable self-propelling and variable stiffness devices for minimally invasive surgery

    , Conference on New Technologies for Computer/Robot Assisted Surgery
  • Conference paper
    Goncalves Nunes U, Demiris Y, 2018,

    3D motion segmentation of articulated rigid bodies based on RGB-D data

    , British Machine Vision Conference (BMVC 2018), Publisher: British Machine Vision Association (BMVA)

This paper addresses the problem of motion segmentation of articulated rigid bodies from a single-view RGB-D data sequence. Current methods either perform dense motion segmentation, and consequently are very computationally demanding, or rely on sparse 2D feature points, which may not be sufficient to represent the entire scene. In this paper, we advocate the use of 3D semi-dense motion segmentation, which also bridges some limitations of standard 2D methods (e.g. background removal). We cast the 3D motion segmentation problem into a subspace clustering problem, adding an adaptive spectral clustering that estimates the number of object rigid parts. The resultant method has few parameters to adjust, takes less time than the temporal length of the scene and requires no post-processing.
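
    The "adaptive spectral clustering that estimates the number of object rigid parts" can be illustrated with the classic eigengap heuristic; the sketch below is a generic version under that assumption, not the authors' exact procedure:

    ```python
    import numpy as np
    from scipy.sparse.csgraph import laplacian
    from sklearn.cluster import KMeans

    def adaptive_spectral_clustering(affinity, k_max=10):
        # affinity: (N, N) symmetric similarity matrix between trajectories.
        # Chooses the number of rigid parts via the largest eigengap of the
        # normalised graph Laplacian, then clusters the spectral embedding.
        L = laplacian(affinity, normed=True)
        vals, vecs = np.linalg.eigh(L)
        k = int(np.argmax(np.diff(vals[:k_max + 1]))) + 1
        emb = vecs[:, :k]
        emb /= np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(emb)
        return labels, k
    ```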

  • Conference paper
    Saputra RP, Kormushev P, 2018,

    Casualty detection for mobile rescue robots via ground-projected point clouds

    , Towards Autonomous Robotic Systems (TAROS) 2018, Publisher: Springer, Cham, Pages: 473-475, ISSN: 0302-9743

In order to operate autonomously, mobile rescue robots need to be able to detect human casualties in disaster situations. In this paper, we propose a novel method for autonomous detection of casualties lying down on the ground based on point-cloud data. This data can be obtained from different sensors, such as an RGB-D camera or a 3D LIDAR sensor. The method is based on a ground-projected point-cloud (GPPC) image to achieve human body shape detection. A preliminary experiment has been conducted using the RANSAC method for floor detection and the HOG feature and SVM classifier to detect human body shape. The results show that the proposed method succeeds in identifying a casualty from point-cloud data in a wide range of viewing angles.
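
    A minimal sketch of the pipeline named in the abstract (RANSAC floor fit, HOG descriptor, SVM classifier), assembled from off-the-shelf scikit components; the thresholds and parameters are illustrative assumptions:

    ```python
    import numpy as np
    from sklearn.linear_model import RANSACRegressor
    from sklearn.svm import LinearSVC
    from skimage.feature import hog

    def fit_floor(points):
        # Fit z = ax + by + c with RANSAC (default LinearRegression base
        # estimator); inliers within 3 cm are treated as the floor.
        model = RANSACRegressor(residual_threshold=0.03)
        model.fit(points[:, :2], points[:, 2])
        return model

    def body_shape_features(gppc_image):
        # HOG descriptor of a ground-projected point-cloud (GPPC) image.
        return hog(gppc_image, orientations=9,
                   pixels_per_cell=(8, 8), cells_per_block=(2, 2))

    # Training the detector (labels: 1 = human body shape, 0 = background):
    # clf = LinearSVC().fit([body_shape_features(w) for w in windows], labels)
    ```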

  • Conference paper
Pardo F, Tavakoli A, Levdik V, Kormushev P et al., 2018,

    Time limits in reinforcement learning

    , International Conference on Machine Learning, Pages: 4042-4051

In reinforcement learning, it is common to let an agent interact for a fixed amount of time with its environment before resetting it and repeating the process in a series of episodes. The task that the agent has to learn can either be to maximize its performance over (i) that fixed period, or (ii) an indefinite period where time limits are only used during training to diversify experience. In this paper, we provide a formal account of how time limits could effectively be handled in each of the two cases and explain why not doing so can cause state aliasing and invalidation of experience replay, leading to suboptimal policies and training instability. In case (i), we argue that the terminations due to time limits are in fact part of the environment, and thus a notion of the remaining time should be included as part of the agent's input to avoid violation of the Markov property. In case (ii), the time limits are not part of the environment and are only used to facilitate learning. We argue that this insight should be incorporated by bootstrapping from the value of the state at the end of each partial episode. For both cases, we illustrate empirically the significance of our considerations in improving the performance and stability of existing reinforcement learning algorithms, showing state-of-the-art results on several control tasks.
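
    The paper's two prescriptions map directly onto how bootstrap targets are built. A minimal sketch for case (ii), where a time-limit truncation must not be treated as a true terminal state:

    ```python
    def td_target(reward, next_value, terminated, truncated, gamma=0.99):
        # terminated: genuine environment termination -> no bootstrapping.
        # truncated:  episode cut off by a training-time limit -> still
        #             bootstrap from the next state's value, because the
        #             time limit is not part of the environment (case ii).
        # For case (i), the remaining time would instead be appended to
        # the agent's observation so the Markov property holds.
        if terminated:
            return reward
        return reward + gamma * next_value
    ```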

  • Conference paper
    Cully AHR, Demiris Y, 2018,

    Hierarchical behavioral repertoires with unsupervised descriptors

    , Genetic and Evolutionary Computation Conference 2018, Publisher: ACM

Enabling artificial agents to automatically learn complex, versatile and high-performing behaviors is a long-standing challenge. This paper presents a step in this direction with hierarchical behavioral repertoires that stack several behavioral repertoires to generate sophisticated behaviors. Each repertoire of this architecture uses the lower repertoires to create complex behaviors as sequences of simpler ones, while only the lowest repertoire directly controls the agent's movements. This paper also introduces a novel approach to automatically define behavioral descriptors thanks to an unsupervised neural network that organizes the produced high-level behaviors. The experiments show that the proposed architecture enables a robot to learn how to draw digits in an unsupervised manner after having learned to draw lines and arcs. Compared to traditional behavioral repertoires, the proposed architecture reduces the dimensionality of the optimization problems by orders of magnitude and yields behaviors with twice the fitness. More importantly, it enables the transfer of knowledge between robots: a hierarchical repertoire evolved for a robotic arm to draw digits can be transferred to a humanoid robot by simply changing the lowest layer of the hierarchy. This enables the humanoid to draw digits although it has never been trained for this task.

  • Journal article
    Kucukyilmaz A, Demiris Y, 2018,

    Learning shared control by demonstration for personalized wheelchair assistance

    , IEEE Transactions on Haptics, Vol: 11, Pages: 431-442, ISSN: 1939-1412

An emerging research problem in assistive robotics is the design of methodologies that allow robots to provide personalized assistance to users. For this purpose, we present a method to learn shared control policies from demonstrations offered by a human assistant. We train a Gaussian process (GP) regression model to continuously regulate the level of assistance between the user and the robot, given the user's previous and current actions and the state of the environment. The assistance policy is learned after only a single human demonstration, i.e., in one shot. Our technique is evaluated in a one-of-a-kind experimental study, where the machine-learned shared control policy is compared to human assistance. Our analyses show that our technique is successful in emulating human shared control, by matching the location and amount of offered assistance on different trajectories. We observed that the effort requirements of the users were comparable between human-robot and human-human settings. Under the learned policy, the jerkiness of the user's joystick movements dropped significantly, despite a significant increase in the jerkiness of the robot assistant's commands. In terms of performance, even though the robotic assistance increased task completion time, the average distance to obstacles stayed in similar ranges to human assistance.
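
    A minimal sketch of the learning step, assuming scikit-learn's GP regressor and placeholder demonstration data; the feature layout follows the abstract (previous and current user actions plus environment state), while everything else is an illustrative assumption:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Placeholder demonstration data: each row holds the user's previous
    # and current actions plus environment state; y is the demonstrated
    # assistance level in [0, 1]. Shapes and features are assumptions.
    X = np.random.rand(50, 4)
    y = np.random.rand(50)

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                  normalize_y=True).fit(X, y)

    alpha, std = gp.predict(X[:1], return_std=True)
    # Shared control blend (illustrative):
    # command = alpha * assistant_action + (1 - alpha) * user_action
    ```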

  • Conference paper
    Wang K, Shah A, Kormushev P, 2018,

    SLIDER: a novel bipedal walking robot without knees

, Towards Autonomous Robotic Systems (TAROS) 2018, Publisher: Springer International Publishing AG, part of Springer Nature, Pages: 471-472, ISSN: 0302-9743

  • Conference paper
    Saputra RP, Kormushev P, 2018,

    ResQbot: a mobile rescue robot with immersive teleperception for casualty extraction

    , Towards Autonomous Robotic Systems (TAROS) 2018, Publisher: Springer International Publishing AG, part of Springer Nature, Pages: 209-220, ISSN: 0302-9743

In this work, we propose a novel mobile rescue robot equipped with immersive stereoscopic teleperception and teleoperation control. This robot is designed to safely perform a casualty-extraction procedure. We have built a proof-of-concept mobile rescue robot called ResQbot as the experimental platform. An approach called “loco-manipulation” is used to perform the casualty-extraction procedure using the platform. The performance of this robot is evaluated in terms of task accomplishment and safety by conducting a mock rescue experiment. We use a custom-made human-sized dummy that has been sensorised to be used as the casualty. In terms of safety, we observe several parameters during the experiment, including impact force, acceleration, speed and displacement of the dummy’s head. We also compare the performance of the proposed immersive stereoscopic teleperception to conventional monocular teleperception. The results of the experiments show that the observed safety parameters are below the key safety thresholds above which head or neck injuries could occur. Moreover, the teleperception comparison results demonstrate an improvement in task-accomplishment performance when the operator is using the immersive teleperception.

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
