Below is a list of all relevant publications authored by Robotics Forum members.

  • Conference paper
    Baron N, Philippides A, Rojas N, 2018,

    A geometric method of singularity avoidance for kinematically redundant planar parallel robots

    , 16th International Symposium on Advances in Robot Kinematics, Publisher: Springer, Pages: 187-194

Methods for avoiding singularities of closed-loop robot mechanisms have traditionally been based on the value of the determinant or the condition number of the Jacobian. A major drawback of these standard techniques is that the closeness of a robot configuration to a singularity lacks geometric, physical interpretation, thus implying that it is uncertain how changes in the robot pose actually move the mechanism further away from such a problematic configuration. This paper presents a geometric approach to singularity avoidance for kinematically redundant planar parallel robots that eliminates the disadvantages of Jacobian-based techniques. The proposed method, which is based on the properties of instantaneous centres of rotation, defines a mathematical distance to a singularity and provides a reliable way of moving the robot further from a singular configuration without changing the pose of the end-effector. The approach is demonstrated on an example robot mechanism and the reciprocal of the condition number of the Jacobian is used to show its advantages.
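
The Jacobian-based measure that the paper critiques (and uses for comparison) can be sketched in a few lines. This is an illustration only: the function names are our own, and the two-link serial arm used as a test mechanism is a generic stand-in, not one of the paper's parallel robots.

```python
import numpy as np

def reciprocal_condition_number(J):
    """1/cond(J): close to 1 far from singularity, tends to 0 as J becomes singular."""
    s = np.linalg.svd(J, compute_uv=False)
    return s.min() / s.max()

def planar_2r_jacobian(q1, q2, l1=1.0, l2=1.0):
    """Jacobian of a two-link planar arm (illustrative stand-in mechanism)."""
    return np.array([
        [-l1*np.sin(q1) - l2*np.sin(q1+q2), -l2*np.sin(q1+q2)],
        [ l1*np.cos(q1) + l2*np.cos(q1+q2),  l2*np.cos(q1+q2)],
    ])

# Elbow bent (well-conditioned) vs. arm nearly outstretched (near-singular)
bent = reciprocal_condition_number(planar_2r_jacobian(0.3, np.pi / 2))
stretched = reciprocal_condition_number(planar_2r_jacobian(0.3, 1e-3))
```

The measure drops towards zero near the singularity, but the value itself has no geometric meaning in units of joint or task space, which is exactly the shortcoming the paper's distance-based formulation addresses.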

  • Conference paper
    Fischer T, Demiris Y, 2018,

    A computational model for embodied visual perspective taking: from physical movements to mental simulation

    , Vision Meets Cognition Workshop at CVPR 2018

To understand people and their intentions, humans have developed the ability to imagine their surroundings from another visual point of view. This cognitive ability is called perspective taking and has been shown to be essential in child development and social interactions. However, the precise cognitive mechanisms underlying perspective taking remain to be fully understood. Here we present a computational model that implements perspective taking as a mental simulation of the physical movements required to step into the other point of view. The visual percept after each mental simulation step is estimated using a set of forward models. Based on our experimental results, we propose that a visual attention mechanism explains the response times reported in human visual perspective taking experiments. The model is also able to generate several testable predictions to be explored in further neurophysiological studies.

  • Conference paper
Bodin B, Nardi L, Wagstaff H, Kelly PHJ, O'Boyle M et al., 2018,

    Algorithmic Performance-Accuracy Trade-off in 3D Vision Applications

    , Pages: 123-124

Simultaneous Localisation And Mapping (SLAM) is a key component of robotics and augmented reality (AR) systems. While a large number of SLAM algorithms have been presented, there has been little effort to unify the interface of such algorithms, or to perform a holistic comparison of their capabilities. This is particularly true when it comes to evaluating the potential trade-offs between computation speed, accuracy, and power consumption. SLAMBench is a benchmarking framework to evaluate existing and future SLAM systems, both open and closed source, over an extensible list of datasets, while using a comparable and clearly specified list of performance metrics. SLAMBench is a publicly available software framework which represents a starting point for quantitative, comparable and validatable experimental research to investigate trade-offs in performance, accuracy and energy consumption across SLAM systems. In this poster we give an overview of SLAMBench and in particular we show how this framework can be used for design space exploration and large-scale performance evaluation on mobile phones.

  • Conference paper
Bodin B, Wagstaff H, Saeedi S, Nardi L, Vespa E, Mawer J, Nisbet A, Lujan M, Furber S, Davison AJ, Kelly PHJ, O'Boyle MFP et al., 2018,

    SLAMBench2: multi-objective head-to-head benchmarking for visual SLAM

    , IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 3637-3644, ISSN: 1050-4729

SLAM is becoming a key component of robotics and augmented reality (AR) systems. While a large number of SLAM algorithms have been presented, there has been little effort to unify the interface of such algorithms, or to perform a holistic comparison of their capabilities. This is a problem since different SLAM applications can have different functional and non-functional requirements. For example, a mobile phone-based AR application has a tight energy budget, while a UAV navigation system usually requires high accuracy. SLAMBench2 is a benchmarking framework to evaluate existing and future SLAM systems, both open and closed source, over an extensible list of datasets, while using a comparable and clearly specified list of performance metrics. A wide variety of existing SLAM algorithms and datasets are supported, e.g. ElasticFusion, InfiniTAM, ORB-SLAM2, OKVIS, and integrating new ones is straightforward and clearly specified by the framework. SLAMBench2 is a publicly available software framework which represents a starting point for quantitative, comparable and validatable experimental research to investigate trade-offs across SLAM systems.
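
A representative accuracy metric of the kind such benchmarks report is absolute trajectory error (ATE). The sketch below is a simplified illustration, not SLAMBench2's actual API: it aligns by translation only, whereas a full ATE computation solves for the best rigid alignment (e.g. via the Umeyama method).

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Absolute trajectory error: RMSE of per-frame position error after
    removing the constant offset between the two trajectories.
    (A full ATE implementation also solves for rotation/scale.)"""
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    est_aligned = est - est.mean(axis=0) + gt.mean(axis=0)
    errors = np.linalg.norm(est_aligned - gt, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

gt = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0]])
est = gt + np.array([0.1, 0.0, 0.0])   # estimator offset by a constant shift
err = ate_rmse(est, gt)                # the constant shift is aligned away -> 0.0
```

Reporting such a metric alongside frame rate and energy per frame is what allows the head-to-head, multi-objective comparisons the paper describes.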

  • Conference paper
    Elsdon J, Demiris Y, 2018,

    Augmented reality for feedback in a shared control spraying task

    , IEEE International Conference on Robotics and Automation (ICRA), Publisher: Institute of Electrical and Electronics Engineers (IEEE), Pages: 1939-1946, ISSN: 1050-4729

Using industrial robots to spray structures has been investigated extensively; however, interesting challenges emerge when using handheld spraying robots. In previous work we have demonstrated the use of shared control of a handheld spraying robot to assist a user in a 3D spraying task. In this paper we demonstrate the use of augmented reality interfaces to increase the user's progress and task awareness. We describe our solutions to challenging calibration issues between the Microsoft HoloLens system and a motion capture system without the need for well-defined markers or careful alignment on the part of the user. Error relative to the motion capture system was shown to be 10 mm after only a 4-second calibration routine. Secondly, we outline a logical approach for visualising liquid density in an augmented reality spraying task; this system allows the user to clearly see target regions to complete, areas that are complete, and areas that have been overdosed. Finally, we conducted a user study to investigate the level of assistance that a handheld robot utilising shared control methods should provide during a spraying task. Using a handheld spraying robot with a moving spray head did not aid the user much over simply actuating the spray nozzle for them. Compared to manual control, the automatic modes significantly reduced the task load experienced by the user and significantly increased the quality of the result of the spraying task, reducing the error by 33-45%.

  • Journal article
    Miyashita K, Oude Vrielink T, Mylonas G, 2018,

    A cable-driven parallel manipulator with force sensing capabilities for high-accuracy tissue endomicroscopy

    , International Journal of Computer Assisted Radiology and Surgery, Vol: 13, Pages: 659-669, ISSN: 1861-6429

PURPOSE: Endomicroscopy (EM) provides high resolution, non-invasive histological tissue information and can be used for scanning of large areas of tissue to assess cancerous and pre-cancerous lesions and their margins. However, current robotic solutions do not provide the accuracy and force sensitivity required to perform safe and accurate tissue scanning. METHODS: A new surgical instrument has been developed that uses a cable-driven parallel mechanism (CDPM) to manipulate an EM probe. End-effector forces are determined by measuring the tensions in each cable. As a result, the instrument can accurately apply a contact force to tissue, while at the same time offering high resolution and highly repeatable probe movement. RESULTS: Force sensitivities of 0.2 and 0.6 N were found for 1 and 2 DoF image acquisition methods, respectively. A back-stepping technique can be used when a higher force sensitivity is required for the acquisition of high quality tissue images. This method was successful in acquiring images on ex vivo liver tissue. CONCLUSION: The proposed approach offers high force sensitivity and precise control, which is essential for robotic EM. The technical benefits of the current system can also be used for other surgical robotic applications, including safe autonomous control, haptic feedback and palpation.
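
The core sensing idea — recovering the end-effector force from the measured cable tensions — can be sketched for a planar case. The geometry below is entirely made up for illustration; the paper's instrument and cable layout differ.

```python
import numpy as np

def end_effector_force(anchor_points, ee_position, tensions):
    """Net planar force on the end-effector of a cable-driven mechanism:
    each cable pulls along the unit vector from the end-effector
    towards its anchor, scaled by the measured tension."""
    p = np.asarray(ee_position, dtype=float)
    f = np.zeros(2)
    for a, t in zip(np.asarray(anchor_points, dtype=float), tensions):
        d = a - p
        f += t * d / np.linalg.norm(d)
    return f

# Hypothetical layout: two horizontal cables with equal tension cancel,
# leaving only the vertical cable's contribution.
anchors = [(-1.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
f = end_effector_force(anchors, (0.0, 0.0), [2.0, 2.0, 1.0])  # -> [0, 1]
```

Measuring tensions rather than adding a dedicated force sensor at the tip is what lets the instrument stay small while still regulating probe-tissue contact force.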

  • Journal article
Matheson E, Secoli R, Burrows C, Leibinger A, Rodriguez y Baena F et al., 2018,

    Cyclic motion control for programmable bevel-tip needles to reduce tissue deformation

    , Journal of Medical Robotics Research, Vol: 4, ISSN: 2424-905X

    Robotic-assisted steered needles aim to accurately control the deflection of the flexible needle’s tip to achieve accurate path following. In doing so, they can decrease trauma to the patient, by avoiding sensitive regions while increasing placement accuracy. This class of needle presents more complicated kinematics compared to straight needles, which can be exploited to produce specific motion profiles via careful controller design and tuning. Motion profiles can be optimized to minimize certain conditions such as maximum tissue deformation and target migration, which was the goal of the formalized cyclic, low-level controller for a Programmable Bevel-tip Needle (PBN) presented in this work. PBNs are composed of a number of interlocked segments that are able to slide with respect to one another. Producing a controlled, desired offset of the tip geometry leads to the corresponding desired curvature of the PBN, and hence desired path trajectory of the system. Here, we propose a cyclical actuation strategy, where the tip configuration is achieved over a number of reciprocal motion cycles, which we hypothesize will reduce tissue deformation during the insertion process. A series of in vitro, planar needle insertion experiments are performed in order to compare the cyclic controller performance with the previously used direct push controller, in terms of targeting accuracy and tissue deformation. It is found that there is no significant difference between the target tracking performance of the controllers, but a significant decrease in axial tissue deformation when using the cyclic controller.

  • Conference paper
    Avila Rencoret FB, Mylonas G, Elson D, 2018,

    Robotic wide-field optical biopsy endoscopy

    , OSA Biophotonics Congress 2018, Publisher: OSA publishing

This paper describes a novel robotic framework for wide-field optical biopsy endoscopy, characterizes its spatial and spectral resolution and real-time hyperspectral tissue classification in vitro, and demonstrates its feasibility on fresh porcine cadaveric colon.

  • Journal article
Vespa E, Nikolov N, Grimm M, Nardi L, Kelly PH, Leutenegger S et al., 2018,

    Efficient octree-based volumetric SLAM supporting signed-distance and occupancy mapping

    , IEEE Robotics and Automation Letters, Vol: 3, Pages: 1144-1151, ISSN: 2377-3766

We present a dense volumetric simultaneous localisation and mapping (SLAM) framework that uses an octree representation for efficient fusion and rendering of either a truncated signed distance field (TSDF) or an occupancy map. The primary aim of this letter is to use one single representation of the environment that can be used not only for robot pose tracking and high-resolution mapping, but seamlessly for planning. We show that our highly efficient octree representation of space fits SLAM and planning purposes in a real-time control loop. In a comprehensive evaluation, we demonstrate dense SLAM accuracy and runtime performance on par with flat hashing approaches when using TSDF-based maps, and considerable speed-ups when using occupancy mapping compared to standard occupancy mapping frameworks. Our SLAM system can run at 10-40 Hz on a modern quad-core CPU, without the need for massive parallelization on a GPU. We, furthermore, demonstrate probabilistic occupancy mapping as an alternative to TSDF mapping in dense SLAM and show its direct applicability to online motion planning, using the example of informed rapidly-exploring random trees (RRT*).
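
The TSDF fusion the letter builds on follows a standard per-voxel rule: each depth measurement updates a running weighted average of the truncated signed distance. The sketch below shows that rule in isolation, ignoring the octree structure and all parameter values used in the actual system.

```python
import numpy as np

def tsdf_update(tsdf, weight, measured_dist, truncation=0.1, max_weight=100.0):
    """Fuse one depth measurement into a voxel: truncate the signed
    distance, then take a weighted running average (standard TSDF fusion)."""
    d = float(np.clip(measured_dist, -truncation, truncation))
    new_tsdf = (tsdf * weight + d) / (weight + 1.0)
    new_weight = min(weight + 1.0, max_weight)  # cap to stay responsive to change
    return new_tsdf, new_weight

# A voxel repeatedly observed 0.05 m in front of the surface converges to 0.05.
tsdf, w = 0.0, 0.0
for _ in range(10):
    tsdf, w = tsdf_update(tsdf, w, 0.05)
```

Swapping this update for a log-odds occupancy update at the same voxel is, in essence, what lets the framework support both map types behind one representation.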

  • Journal article
Fischer T, Puigbo J-Y, Camilleri D, Nguyen PDH, Moulin-Frier C, Lallee S, Metta G, Prescott TJ, Demiris Y, Verschure P et al., 2018,

    iCub-HRI: A software framework for complex human-robot interaction scenarios on the iCub humanoid robot

    , Frontiers in Robotics and AI, Vol: 5, Pages: 1-9, ISSN: 2296-9144

Generating complex, human-like behaviour in a humanoid robot like the iCub requires the integration of a wide range of open source components and a scalable cognitive architecture. Hence, we present the iCub-HRI library which provides convenience wrappers for components related to perception (object recognition, agent tracking, speech recognition, touch detection), object manipulation (basic and complex motor actions) and social interaction (speech synthesis, joint attention), exposed as a C++ library with bindings for Java (which also allows iCub-HRI to be used from Matlab) and Python. In addition to previously integrated components, the library allows for simple extension to new components and rapid prototyping by adapting to changes in interfaces between components. We also provide a set of modules which make use of the library, such as a high-level knowledge acquisition module and an action recognition module. The proposed architecture has been successfully employed for a complex human-robot interaction scenario involving the acquisition of language capabilities, execution of goal-oriented behaviour and expression of a verbal narrative of the robot's experience in the world. Accompanying this paper is a tutorial which allows a subset of this interaction to be reproduced. The architecture is aimed at researchers familiarising themselves with the iCub ecosystem, as well as expert users, and we expect the library to be widely used in the iCub community.

  • Conference paper
    Saputra RP, Kormushev P, 2018,

    ResQbot: A mobile rescue robot for casualty extraction

    , 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2018), Publisher: Association for Computing Machinery, Pages: 239-240

Performing search and rescue missions in disaster-struck environments is challenging. Despite the advances in the robotic search phase of rescue missions, few works have focused on the physical casualty extraction phase. In this work, we propose a mobile rescue robot that is capable of performing a safe casualty extraction routine. To perform this routine, the robot adopts a loco-manipulation approach. We have designed and built a mobile rescue robot platform called ResQbot as a proof of concept of the proposed system. We have conducted preliminary experiments using a sensorised human-sized dummy as a victim, to confirm that the platform is capable of performing a safe casualty extraction procedure.

  • Conference paper
Nica A, Vespa E, González de Aledo P, Kelly PHJ et al., 2018,

    Investigating automatic vectorization for real-time 3D scene understanding

    Simultaneous Localization And Mapping (SLAM) is the problem of building a representation of a geometric space while simultaneously estimating the observer’s location within the space. While this seems to be a chicken-and-egg problem, several algorithms have appeared in the last decades that approximately and iteratively solve this problem. SLAM algorithms are tailored to the available resources, hence aimed at balancing the precision of the map with the constraints that the computational platform imposes and the desire to obtain real-time results. Working with KinectFusion, an established SLAM implementation, we explore in this work the vectorization opportunities present in this scenario, with the goal of using the CPU to its full potential. Using ISPC, an automatic vectorization tool, we produce a partially vectorized version of KinectFusion. Along the way we explore a number of optimization strategies, among which tiling to exploit ray-coherence and outer loop vectorization, obtaining up to 4x speed-up over the baseline on an 8-wide vector machine.
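
The kind of loop-level transformation the paper pursues can be illustrated with a NumPy stand-in (the actual work uses ISPC on KinectFusion's C++ kernels; the back-projection kernel and camera intrinsics below are generic examples, not taken from the paper).

```python
import numpy as np

def depth_to_vertices_scalar(depth, fx, fy, cx, cy):
    """Back-project a depth image to 3D vertices one pixel at a time --
    the scalar inner loop a vectorizer such as ISPC would target."""
    h, w = depth.shape
    out = np.empty((h, w, 3))
    for v in range(h):
        for u in range(w):
            z = depth[v, u]
            out[v, u] = ((u - cx) * z / fx, (v - cy) * z / fy, z)
    return out

def depth_to_vertices_vectorized(depth, fx, fy, cx, cy):
    """Same computation expressed over whole arrays; NumPy evaluates it
    lane-parallel, analogous to SIMD vectorization of the inner loop."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    return np.stack(((u - cx) * depth / fx, (v - cy) * depth / fy, depth), axis=-1)

depth = np.full((4, 4), 2.0)                       # toy 4x4 depth image, 2 m everywhere
a = depth_to_vertices_scalar(depth, 525.0, 525.0, 2.0, 2.0)
b = depth_to_vertices_vectorized(depth, 525.0, 525.0, 2.0, 2.0)
```

Because every pixel is independent, this kernel vectorizes cleanly; the paper's harder cases, such as ray casting, need tiling to exploit ray coherence before the lanes stay busy.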

  • Conference paper
    Escribano Macias J, Angeloudis P, Ochieng W, 2018,

    AIAA Integrated Trajectory-Location-Routing for Rapid Humanitarian Deliveries using Unmanned Aerial Vehicles

    , 2018 Aviation Technology, Integration, and Operations Conference
  • Conference paper
    Avila Rencoret FB, Mylonas GP, Elson D, 2018,

    Robotic Wide-Field Optical Biopsy Imaging For Flexible Endoscopy

    , 26th International Congress of the European Association for Endoscopic Surgery (EAES)
  • Conference paper
    Tavakoli A, Pardo F, Kormushev P, 2018,

    Action branching architectures for deep reinforcement learning

    , AAAI 2018, Publisher: AAAI

Discrete-action algorithms have been central to numerous recent successes of deep reinforcement learning. However, applying these algorithms to high-dimensional action tasks requires tackling the combinatorial increase of the number of possible actions with the number of action dimensions. This problem is further exacerbated for continuous-action tasks that require fine control of actions via discretization. In this paper, we propose a novel neural architecture featuring a shared decision module followed by several network branches, one for each action dimension. This approach achieves a linear increase of the number of network outputs with the number of degrees of freedom by allowing a level of independence for each individual action dimension. To illustrate the approach, we present a novel agent, called Branching Dueling Q-Network (BDQ), as a branching variant of the Dueling Double Deep Q-Network (Dueling DDQN). We evaluate the performance of our agent on a set of challenging continuous control tasks. The empirical results show that the proposed agent scales gracefully to environments with increasing action dimensionality and indicate the significance of the shared decision module in coordination of the distributed action branches. Furthermore, we show that the proposed agent performs competitively against a state-of-the-art continuous control algorithm, Deep Deterministic Policy Gradient (DDPG).
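
The shape of the branching architecture can be sketched as a toy NumPy forward pass (random weights, our own function names; the paper's agent is a trained deep network, not this):

```python
import numpy as np

rng = np.random.default_rng(0)

def branching_q_values(state, shared_W, value_w, branch_Ws):
    """Dueling, branching Q-network sketch: one shared trunk, one state
    value, and one advantage head per action dimension. Output count
    grows linearly (dims * bins) instead of combinatorially (bins ** dims)."""
    h = np.tanh(shared_W @ state)                   # shared decision module
    v = float(value_w @ h)                          # state value
    q_per_branch = []
    for W in branch_Ws:                             # one branch per action dimension
        adv = W @ h                                 # per-bin advantages
        q_per_branch.append(v + adv - adv.mean())   # dueling aggregation
    return q_per_branch

state_dim, hidden, dims, bins = 8, 16, 3, 5
shared_W = rng.normal(size=(hidden, state_dim))
value_w = rng.normal(size=hidden)
branch_Ws = [rng.normal(size=(bins, hidden)) for _ in range(dims)]

qs = branching_q_values(rng.normal(size=state_dim), shared_W, value_w, branch_Ws)
action = [int(np.argmax(q)) for q in qs]            # independent argmax per dimension
```

With 3 dimensions of 5 bins each, the network emits 15 Q-values rather than the 125 a flat discretization would need, which is the scaling argument at the heart of the paper.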

  • Conference paper
    Elson D, Avila Rencoret F, Mylonas G, 2018,

    Robotic Wide-Field Optical Biopsy Imaging for Flexible Endoscopy (Gerhard Buess Technology Award)

    , 26th Annual International EAES Congress
  • Conference paper
Zhao M, Oude Vrielink T, Elson D, Mylonas G et al., 2018,

    Endoscopic TORS-CYCLOPS: A Novel Cable-driven Parallel Robot for Transoral Laser Surgery

    , 26th Annual International EAES Congress
  • Book chapter
    Porta JM, Rojas N, Thomas F, 2018,

    Distance geometry in active structures

    , Mechatronics for Cultural Heritage and Civil Engineering, Editors: Ottaviano, Pelliccio, Gattulli, Publisher: Springer, Pages: 115-136

Distance constraints are an emerging formulation that offers intuitive geometric interpretation of otherwise complex problems. The formulation can be applied to problems such as position and singularity analysis and path planning of mechanisms and structures. This chapter reviews the recent advances in distance geometry, providing a unified view of these apparently disparate problems. It covers algebraic and numerical techniques and is, to the best of our knowledge, the first attempt to summarize the different approaches relating to distance-based formulations.
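
A small example of the coordinate-free relations distance geometry builds on is the Cayley-Menger determinant, which recovers a triangle's area from its pairwise distances alone (this classical identity is illustrative of the formulation; the chapter covers far more general machinery):

```python
import numpy as np

def cayley_menger_area(d12, d13, d23):
    """Triangle area from pairwise distances only, via the Cayley-Menger
    determinant: for 3 points, det(M) = -16 * area**2. The distances are
    realizable by actual points iff the resulting area is real."""
    M = np.array([
        [0.0, 1.0,      1.0,      1.0     ],
        [1.0, 0.0,      d12**2,   d13**2  ],
        [1.0, d12**2,   0.0,      d23**2  ],
        [1.0, d13**2,   d23**2,   0.0     ],
    ])
    return float(np.sqrt(np.linalg.det(M) / -16.0))

area = cayley_menger_area(3.0, 4.0, 5.0)   # 3-4-5 right triangle -> 6.0
```

The same determinant, generalized to more points, underlies the distance-based position and singularity analyses the chapter surveys: constraints are expressed directly on squared distances, with no reference frame or joint angles.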

  • Conference paper
    Kanajar P, Caldwell DG, Kormushev P, 2017,

    Climbing over large obstacles with a humanoid robot via multi-contact motion planning

    , IEEE RO-MAN 2017: 26th IEEE International Symposium on Robot and Human Interactive Communication, Publisher: IEEE, Pages: 1202-1209

    Incremental progress in humanoid robot locomotion over the years has achieved important capabilities such as navigation over flat or uneven terrain, stepping over small obstacles and climbing stairs. However, the locomotion research has mostly been limited to using only bipedal gait and only foot contacts with the environment, using the upper body for balancing without considering additional external contacts. As a result, challenging locomotion tasks like climbing over large obstacles relative to the size of the robot have remained unsolved. In this paper, we address this class of open problems with an approach based on multi-body contact motion planning guided through physical human demonstrations. Our goal is to make the humanoid locomotion problem more tractable by taking advantage of objects in the surrounding environment instead of avoiding them. We propose a multi-contact motion planning algorithm for humanoid robot locomotion which exploits the whole-body motion and multi-body contacts including both the upper and lower body limbs. The proposed motion planning algorithm is applied to a challenging task of climbing over a large obstacle. We demonstrate successful execution of the climbing task in simulation using our multi-contact motion planning algorithm initialized via a transfer from real-world human demonstrations of the task and further optimized.

  • Conference paper
Zhang F, Cully A, Demiris Y, 2017,

    Personalized Robot-assisted Dressing using User Modeling in Latent Spaces

    , 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, ISSN: 2153-0866

Robots have the potential to provide tremendous support to disabled and elderly people in their everyday tasks, such as dressing. Recent studies on robotic dressing assistance usually view dressing as a trajectory planning problem. However, the user movements during the dressing process are rarely taken into account, which often leads to failure of the planned trajectory and may put the user at risk. The main difficulty of taking user movements into account is caused by severe occlusions created by the robot, the user, and the clothes during the dressing process, which prevent vision sensors from accurately detecting the postures of the user in real time. In this paper, we address this problem by introducing an approach that allows the robot to automatically adapt its motion according to the force applied on the robot's gripper caused by user movements. There are two main contributions introduced in this paper: 1) the use of a hierarchical multi-task control strategy to automatically adapt the robot motion and minimize the force applied between the user and the robot caused by user movements; 2) the online update of the dressing trajectory based on the user movement limitations modeled with the Gaussian Process Latent Variable Model in a latent space, and the density information extracted from such latent space. The combination of these two contributions leads to a personalized dressing assistance that can cope with unpredicted user movements during the dressing while constantly minimizing the force that the robot may apply on the user. The experimental results demonstrate that the proposed method allows the Baxter humanoid robot to provide personalized dressing assistance for human users with simulated upper-body impairments.

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.