Below is a list of all relevant publications authored by Robotics Forum members.
Conference paper: Bodin B, Wagstaff H, Saeedi S, et al., 2018.
SLAM is becoming a key component of robotics and augmented reality (AR) systems. While a large number of SLAM algorithms have been presented, there has been little effort to unify the interface of such algorithms, or to perform a holistic comparison of their capabilities. This is a problem since different SLAM applications can have different functional and non-functional requirements. For example, a mobile phone-based AR application has a tight energy budget, while a UAV navigation system usually requires high accuracy. SLAMBench2 is a benchmarking framework to evaluate existing and future SLAM systems, both open and closed source, over an extensible list of datasets, while using a comparable and clearly specified list of performance metrics. A wide variety of existing SLAM algorithms and datasets is supported (e.g., ElasticFusion, InfiniTAM, ORB-SLAM2 and OKVIS), and integrating new ones is straightforward and clearly specified by the framework. SLAMBench2 is a publicly-available software framework which represents a starting point for quantitative, comparable and validatable experimental research to investigate trade-offs across SLAM systems.
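As a concrete illustration of the sort of accuracy metric such benchmarks report, the widely used absolute trajectory error (ATE) reduces to an RMSE over per-frame position errors. A minimal sketch, assuming trajectories are already time-associated and aligned (the function name and tuple format are illustrative, not SLAMBench2's API):

```python
import math

def ate_rmse(est, gt):
    """Absolute trajectory error as RMSE over per-frame position
    errors; assumes the estimated and ground-truth trajectories
    are already aligned and time-associated."""
    assert len(est) == len(gt)
    sq_err = 0.0
    for (ex, ey, ez), (gx, gy, gz) in zip(est, gt):
        sq_err += (ex - gx) ** 2 + (ey - gy) ** 2 + (ez - gz) ** 2
    return math.sqrt(sq_err / len(est))

# A 0.2 m lateral drift on the second of two frames.
err = ate_rmse([(0, 0, 0), (1, 0, 0)], [(0, 0, 0), (1, 0.2, 0)])
```

Real evaluations additionally align the estimate to ground truth (e.g. with a least-squares rigid fit) before computing the error.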
Journal article: Miyashita K, Oude Vrielink T, Mylonas G, 2018,
A cable-driven parallel manipulator with force sensing capabilities for high-accuracy tissue endomicroscopy, International Journal of Computer Assisted Radiology and Surgery, Vol: 13, Pages: 659-669, ISSN: 1861-6429
PURPOSE: Endomicroscopy (EM) provides high-resolution, non-invasive histological tissue information and can be used for scanning large areas of tissue to assess cancerous and pre-cancerous lesions and their margins. However, current robotic solutions do not provide the accuracy and force sensitivity required to perform safe and accurate tissue scanning. METHODS: A new surgical instrument has been developed that uses a cable-driven parallel mechanism (CDPM) to manipulate an EM probe. End-effector forces are determined by measuring the tensions in each cable. As a result, the instrument allows a contact force to be accurately applied to the tissue, while at the same time offering high-resolution and highly repeatable probe movement. RESULTS: Force sensitivities of 0.2 and 0.6 N were found for the 1- and 2-DoF image acquisition methods, respectively. A back-stepping technique can be used when a higher force sensitivity is required for the acquisition of high-quality tissue images. This method was successful in acquiring images on ex vivo liver tissue. CONCLUSION: The proposed approach offers high force sensitivity and precise control, which is essential for robotic EM. The technical benefits of the current system can also be used for other surgical robotic applications, including safe autonomous control, haptic feedback and palpation.
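The force-sensing principle described here, recovering the end-effector force from measured cable tensions, can be sketched for an idealized planar mechanism: each cable pulls the end-effector toward its anchor, so the net force is the tension-weighted sum of unit vectors along the cables. The geometry and names below are illustrative, not the instrument's actual model:

```python
import numpy as np

def end_effector_force(anchor_pts, ee_pos, tensions):
    """Estimate the net planar force on the end-effector of an
    idealized cable-driven mechanism from measured cable tensions.
    Each cable pulls toward its anchor, so the net force is the
    sum of tension * unit direction toward that anchor."""
    f = np.zeros(2)
    for a, t in zip(anchor_pts, tensions):
        d = np.asarray(a, float) - np.asarray(ee_pos, float)
        f += t * d / np.linalg.norm(d)
    return f

# Three cables 120 degrees apart with equal tension cancel out.
anchors = [(np.cos(th), np.sin(th)) for th in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
f = end_effector_force(anchors, (0.0, 0.0), [1.0, 1.0, 1.0])
```

In the actual instrument the tension measurements would additionally carry friction and sensor noise, which this idealized sketch ignores.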
Journal article: Matheson E, Secoli R, Burrows C, et al., 2018.
Robotic-assisted steered needles aim to accurately control the deflection of the flexible needle’s tip to achieve accurate path following. In doing so, they can decrease trauma to the patient, by avoiding sensitive regions while increasing placement accuracy. This class of needle presents more complicated kinematics compared to straight needles, which can be exploited to produce specific motion profiles via careful controller design and tuning. Motion profiles can be optimized to minimize certain conditions such as maximum tissue deformation and target migration, which was the goal of the formalized cyclic, low-level controller for a Programmable Bevel-tip Needle (PBN) presented in this work. PBNs are composed of a number of interlocked segments that are able to slide with respect to one another. Producing a controlled, desired offset of the tip geometry leads to the corresponding desired curvature of the PBN, and hence desired path trajectory of the system. Here, we propose a cyclical actuation strategy, where the tip configuration is achieved over a number of reciprocal motion cycles, which we hypothesize will reduce tissue deformation during the insertion process. A series of in vitro, planar needle insertion experiments are performed in order to compare the cyclic controller performance with the previously used direct push controller, in terms of targeting accuracy and tissue deformation. It is found that there is no significant difference between the target tracking performance of the controllers, but a significant decrease in axial tissue deformation when using the cyclic controller.
Conference paper: Avila Rencoret FB, Mylonas G, Elson D, 2018.
This paper describes a novel robotic framework for wide-field optical biopsy endoscopy, characterizes its spatial and spectral resolution and real-time hyperspectral tissue classification in vitro, and demonstrates its feasibility on fresh porcine cadaveric colon.
Journal article: Vespa E, Nikolov N, Grimm M, et al., 2018.
We present a dense volumetric simultaneous localisation and mapping (SLAM) framework that uses an octree representation for efficient fusion and rendering of either a truncated signed distance field (TSDF) or an occupancy map. The primary aim of this letter is to use one single representation of the environment that can be used not only for robot pose tracking and high-resolution mapping, but seamlessly for planning. We show that our highly efficient octree representation of space fits SLAM and planning purposes in a real-time control loop. In a comprehensive evaluation, we demonstrate dense SLAM accuracy and runtime performance on par with flat hashing approaches when using TSDF-based maps, and considerable speed-ups when using occupancy mapping compared to standard occupancy map frameworks. Our SLAM system can run at 10-40 Hz on a modern quad-core CPU, without the need for massive parallelization on a GPU. We furthermore demonstrate probabilistic occupancy mapping as an alternative to TSDF mapping in dense SLAM and show its direct applicability to online motion planning, using the example of informed rapidly-exploring random trees (RRT*).
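The fusion step of a TSDF map, whether the voxels live in a flat grid or in octree leaves as here, reduces to a per-voxel weighted running average of truncated signed distances. A minimal single-voxel sketch (the truncation band and weight cap are illustrative values, not the paper's):

```python
def fuse_tsdf(tsdf, weight, new_sdf, new_weight=1.0, trunc=0.1, max_weight=100.0):
    """Weighted running-average TSDF fusion for a single voxel:
    clamp the new signed distance to the truncation band, average
    it into the stored value, and cap the accumulated weight so
    the map stays responsive to change."""
    d = max(-trunc, min(trunc, new_sdf))
    fused = (tsdf * weight + d * new_weight) / (weight + new_weight)
    w = min(weight + new_weight, max_weight)
    return fused, w

# Three consistent observations of a surface 5 cm in front of the voxel.
val, w = 0.0, 0.0
for measurement in (0.05, 0.05, 0.05):
    val, w = fuse_tsdf(val, w, measurement)
```

An octree backend stores exactly this pair per allocated leaf voxel; the tree only changes where voxels get allocated, not how they are fused.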
Journal article: Fischer T, Puigbo J-Y, Camilleri D, et al., 2018,
iCub-HRI: A software framework for complex human-robot interaction scenarios on the iCub humanoid robot, Frontiers in Robotics and AI, Vol: 5, Pages: 1-9, ISSN: 2296-9144
Generating complex, human-like behaviour in a humanoid robot like the iCub requires the integration of a wide range of open source components and a scalable cognitive architecture. Hence, we present the iCub-HRI library which provides convenience wrappers for components related to perception (object recognition, agent tracking, speech recognition, touch detection), object manipulation (basic and complex motor actions) and social interaction (speech synthesis, joint attention), exposed as a C++ library with bindings for Java (allowing iCub-HRI to be used within Matlab) and Python. In addition to previously integrated components, the library allows for simple extension to new components and rapid prototyping by adapting to changes in interfaces between components. We also provide a set of modules which make use of the library, such as a high-level knowledge acquisition module and an action recognition module. The proposed architecture has been successfully employed for a complex human-robot interaction scenario involving the acquisition of language capabilities, execution of goal-oriented behaviour and expression of a verbal narrative of the robot's experience in the world. Accompanying this paper is a tutorial which allows a subset of this interaction to be reproduced. The architecture is aimed at researchers familiarising themselves with the iCub ecosystem, as well as expert users, and we expect the library to be widely used in the iCub community.
Conference paper: Saputra RP, Kormushev P, 2018.
Performing search and rescue missions in disaster-struck environments is challenging. Despite the advances in the robotic search phase of rescue missions, few works have focused on the physical casualty extraction phase. In this work, we propose a mobile rescue robot that is capable of performing a safe casualty extraction routine. To perform this routine, the robot adopts a loco-manipulation approach. We have designed and built a mobile rescue robot platform called ResQbot as a proof of concept of the proposed system. We have conducted preliminary experiments using a sensorised human-sized dummy as a victim, to confirm that the platform is capable of performing a safe casualty extraction procedure.
Conference paper: Nica A, Vespa E, González de Aledo P, et al., 2018.
Simultaneous Localization And Mapping (SLAM) is the problem of building a representation of a geometric space while simultaneously estimating the observer’s location within the space. While this seems to be a chicken-and-egg problem, several algorithms have appeared in the last decades that approximately and iteratively solve this problem. SLAM algorithms are tailored to the available resources, hence aimed at balancing the precision of the map with the constraints that the computational platform imposes and the desire to obtain real-time results. Working with KinectFusion, an established SLAM implementation, we explore in this work the vectorization opportunities present in this scenario, with the goal of using the CPU to its full potential. Using ISPC, an automatic vectorization tool, we produce a partially vectorized version of KinectFusion. Along the way we explore a number of optimization strategies, among which tiling to exploit ray-coherence and outer loop vectorization, obtaining up to 4x speed-up over the baseline on an 8-wide vector machine.
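The outer-loop-vectorization idea can be illustrated without ISPC. Below, a KinectFusion-style depth back-projection is written once as a scalar per-pixel loop and once with the pixel loops vectorized; NumPy stands in for SIMD lanes here, and the names and camera parameters are illustrative:

```python
import numpy as np

def depth_to_points_scalar(depth, fx, fy, cx, cy):
    """Pinhole back-projection, one pixel at a time (scalar loops)."""
    h, w = depth.shape
    pts = np.empty((h, w, 3))
    for v in range(h):
        for u in range(w):
            z = depth[v, u]
            pts[v, u] = ((u - cx) * z / fx, (v - cy) * z / fy, z)
    return pts

def depth_to_points_vectorized(depth, fx, fy, cx, cy):
    """Same computation with both pixel loops vectorized away."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    return np.stack(((u - cx) * depth / fx, (v - cy) * depth / fy, depth), axis=-1)

rng = np.random.default_rng(1)
depth = rng.random((3, 4))
params = dict(fx=525.0, fy=525.0, cx=2.0, cy=1.5)
a = depth_to_points_scalar(depth, **params)
b = depth_to_points_vectorized(depth, **params)
```

ISPC applies the same transformation at the SIMD-lane level, and the tiling mentioned in the abstract additionally groups neighbouring rays so that coherent rays share memory traffic.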
Conference paper: Escribano Macias J, Angeloudis P, Ochieng W, 2018,
Integrated Trajectory-Location-Routing for Rapid Humanitarian Deliveries using Unmanned Aerial Vehicles, 2018 AIAA Aviation Technology, Integration, and Operations Conference
Conference paper: Avila Rencoret FB, Mylonas GP, Elson D, 2018,
Robotic Wide-Field Optical Biopsy Imaging For Flexible Endoscopy, 26th International Congress of the European Association for Endoscopic Surgery (EAES)
Conference paper: Tavakoli A, Pardo F, Kormushev P, 2018,
Action branching architectures for deep reinforcement learning, AAAI 2018, Publisher: AAAI
Discrete-action algorithms have been central to numerous recent successes of deep reinforcement learning. However, applying these algorithms to high-dimensional action tasks requires tackling the combinatorial increase of the number of possible actions with the number of action dimensions. This problem is further exacerbated for continuous-action tasks that require fine control of actions via discretization. In this paper, we propose a novel neural architecture featuring a shared decision module followed by several network branches, one for each action dimension. This approach achieves a linear increase of the number of network outputs with the number of degrees of freedom by allowing a level of independence for each individual action dimension. To illustrate the approach, we present a novel agent, called Branching Dueling Q-Network (BDQ), as a branching variant of the Dueling Double Deep Q-Network (Dueling DDQN). We evaluate the performance of our agent on a set of challenging continuous control tasks. The empirical results show that the proposed agent scales gracefully to environments with increasing action dimensionality and indicate the significance of the shared decision module in coordination of the distributed action branches. Furthermore, we show that the proposed agent performs competitively against a state-of-the-art continuous control algorithm, Deep Deterministic Policy Gradient (DDPG).
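A toy forward pass makes the linear-versus-combinatorial point concrete: with N action dimensions of M bins each, a branching head emits N·M Q-values, where a joint discrete head would need M^N. This is only a sketch of the output-scaling idea; the layer sizes and weights are illustrative, not the BDQ architecture:

```python
import numpy as np

def branching_q_values(features, shared_W, branch_Ws):
    """Toy forward pass of a branching action-value head: one
    shared representation feeds one small linear head per action
    dimension, so the output count grows linearly with dimensions."""
    h = np.tanh(features @ shared_W)      # shared decision module
    return [h @ W for W in branch_Ws]     # one Q-vector per action branch

rng = np.random.default_rng(0)
n_dims, n_bins, d_in, d_h = 6, 5, 8, 16
shared = rng.normal(size=(d_in, d_h))
branches = [rng.normal(size=(d_h, n_bins)) for _ in range(n_dims)]
qs = branching_q_values(rng.normal(size=d_in), shared, branches)
n_outputs = sum(q.size for q in qs)       # 6 * 5 = 30, versus 5**6 = 15625 jointly
```

Acting then means taking an argmax per branch independently, which is what makes the factorisation cheap.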
Conference paper: Elson D, Avila Rencoret F, Mylonas G, 2018,
Robotic Wide-Field Optical Biopsy Imaging for Flexible Endoscopy (Gerhard Buess Technology Award), 26th Annual International EAES Congress
Conference paper: Zhao M, Oude Vrielink T, Elson D, et al., 2018,
Endoscopic TORS-CYCLOPS: A Novel Cable-driven Parallel Robot for Transoral Laser Surgery, 26th Annual International EAES Congress
Book chapter: Porta JM, Rojas N, Thomas F, 2018.
Distance constraints are an emerging formulation that offers an intuitive geometrical interpretation of otherwise complex problems. The formulation can be applied to problems such as position and singularity analysis and path planning of mechanisms and structures. This paper reviews recent advances in distance geometry, providing a unified view of these apparently disparate problems. It covers algebraic and numerical techniques and is, to the best of our knowledge, the first attempt to summarize the different approaches relating to distance-based formulations.
Conference paper: Kanajar P, Caldwell DG, Kormushev P, 2017.
Incremental progress in humanoid robot locomotion over the years has achieved important capabilities such as navigation over flat or uneven terrain, stepping over small obstacles and climbing stairs. However, the locomotion research has mostly been limited to using only bipedal gait and only foot contacts with the environment, using the upper body for balancing without considering additional external contacts. As a result, challenging locomotion tasks like climbing over large obstacles relative to the size of the robot have remained unsolved. In this paper, we address this class of open problems with an approach based on multi-body contact motion planning guided through physical human demonstrations. Our goal is to make the humanoid locomotion problem more tractable by taking advantage of objects in the surrounding environment instead of avoiding them. We propose a multi-contact motion planning algorithm for humanoid robot locomotion which exploits the whole-body motion and multi-body contacts including both the upper and lower body limbs. The proposed motion planning algorithm is applied to a challenging task of climbing over a large obstacle. We demonstrate successful execution of the climbing task in simulation using our multi-contact motion planning algorithm initialized via a transfer from real-world human demonstrations of the task and further optimized.
Conference paper: Zhang F, Cully A, Demiris Y, 2017.
Robots have the potential to provide tremendous support to disabled and elderly people in their everyday tasks, such as dressing. Many recent studies on robotic dressing assistance view dressing as a trajectory planning problem. However, user movements during the dressing process are rarely taken into account, which often leads to failure of the planned trajectory and may put the user at risk. The main difficulty of taking user movements into account is caused by severe occlusions created by the robot, the user, and the clothes during the dressing process, which prevent vision sensors from accurately detecting the postures of the user in real time. In this paper, we address this problem by introducing an approach that allows the robot to automatically adapt its motion according to the force applied on the robot's gripper caused by user movements. There are two main contributions introduced in this paper: 1) the use of a hierarchical multi-task control strategy to automatically adapt the robot motion and minimize the force applied between the user and the robot caused by user movements; 2) the online update of the dressing trajectory based on the user movement limitations modeled with the Gaussian Process Latent Variable Model in a latent space, and the density information extracted from such latent space. The combination of these two contributions leads to a personalized dressing assistance that can cope with unpredicted user movements during the dressing while constantly minimizing the force that the robot may apply on the user. The experimental results demonstrate that the proposed method allows the Baxter humanoid robot to provide personalized dressing assistance for human users with simulated upper-body impairments.
Conference paper: Rakicevic N, Kormushev P, 2017,
Efficient Robot Task Learning and Transfer via Informed Search in Movement Parameter Space, Workshop on Acting and Interacting in the Real World: Challenges in Robot Learning, 31st Conference on Neural Information Processing Systems (NIPS 2017)
Conference paper: Tavakoli A, Pardo F, Kormushev P, 2017,
Action Branching Architectures for Deep Reinforcement Learning, Deep Reinforcement Learning Symposium, 31st Conference on Neural Information Processing Systems (NIPS 2017)
Conference paper: Choi J, Chang HJ, Yun S, et al., 2017.
We propose a new tracking framework with an attentional mechanism that chooses a subset of the associated correlation filters for increased robustness and computational efficiency. The subset of filters is adaptively selected by a deep attentional network according to the dynamic properties of the tracking target. Our contributions are manifold, and are summarised as follows: (i) Introducing the Attentional Correlation Filter Network which allows adaptive tracking of dynamic targets. (ii) Utilising an attentional network which shifts the attention to the best candidate modules, as well as predicting the estimated accuracy of currently inactive modules. (iii) Enlarging the variety of correlation filters which cover target drift, blurriness, occlusion, scale changes, and flexible aspect ratio. (iv) Validating the robustness and efficiency of the attentional mechanism for visual tracking through a number of experiments. Our method achieves similar performance to non real-time trackers, and state-of-the-art performance amongst real-time trackers.
Conference paper: Yoo YJ, Chang H, Yun S, et al., 2017.
This paper proposes a new high-dimensional regression method by merging Gaussian process regression into a variational autoencoder framework. In contrast to other regression methods, the proposed method focuses on the case where output responses are on a complex high-dimensional manifold, such as images. Our contributions are summarized as follows: (i) A new regression method estimating high-dimensional image responses, which is not handled by existing regression algorithms, is proposed. (ii) The proposed regression method introduces a strategy to learn the latent space as well as the encoder and decoder so that the result of the regressed response in the latent space coincides with the corresponding response in the data space. (iii) The proposed regression is embedded into a generative model, and the whole procedure is developed by the variational autoencoder framework. We demonstrate the robustness and effectiveness of our method through a number of experiments on various visual data regression problems.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.