Below is a list of all relevant publications authored by Robotics Forum members.

  • Journal article
    Escribano Macias J, Angeloudis P, Ochieng W, 2020,

    Optimal hub selection for rapid medical deliveries using unmanned aerial vehicles

    , Transportation Research Part C: Emerging Technologies, Vol: 110, Pages: 56-80, ISSN: 0968-090X

    Unmanned Aerial Vehicles (UAVs) are being increasingly deployed in humanitarian response operations. Beyond regulations, vehicle range and integration with the humanitarian supply chain inhibit their deployment. To address these issues, we present a novel bi-stage operational planning approach that consists of a trajectory optimisation algorithm (that considers multiple flight stages), and a hub selection-routing algorithm that incorporates a new battery management heuristic. We apply the algorithm to a hypothetical response mission in Taiwan after the Chi-Chi earthquake of 1999 considering mission duration and distribution fairness. Our analysis indicates that UAV fleets can be used to provide rapid relief to populations of 20,000 individuals in under 24 h. Additionally, the proposed methodology achieves significant reductions in mission duration and battery stock requirements with respect to conservative energy estimations and other heuristics.
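
    The paper's bi-stage approach couples trajectory optimisation with hub selection-routing and a battery-management heuristic. As a much simpler, hedged illustration of the hub-selection idea only, the sketch below greedily picks hubs that cover the most still-unserved demand within a fixed UAV round-trip range; the coordinates, demand values and max_range_km parameter are hypothetical placeholders, not values from the study.

        import math

        def reachable_demand(hub, demand_points, max_range_km):
            """Total demand within round-trip range of a single hub."""
            total = 0.0
            for (x, y, demand) in demand_points:
                dist = math.hypot(x - hub[0], y - hub[1])
                if 2 * dist <= max_range_km:   # the out-and-back flight must fit in range
                    total += demand
            return total

        def greedy_hub_selection(candidates, demand_points, max_range_km, n_hubs):
            """Greedily choose hubs that cover the most still-unserved demand."""
            selected, remaining = [], list(demand_points)
            for _ in range(n_hubs):
                best = max(candidates,
                           key=lambda h: reachable_demand(h, remaining, max_range_km))
                selected.append(best)
                # drop the demand points now covered by the chosen hub
                remaining = [p for p in remaining
                             if 2 * math.hypot(p[0] - best[0], p[1] - best[1]) > max_range_km]
            return selected

        # hypothetical toy instance: candidate hubs (x_km, y_km), demand points (x_km, y_km, people)
        hubs = [(0, 0), (30, 10), (60, 40)]
        points = [(5, 5, 2000), (32, 12, 8000), (58, 41, 10000)]
        print(greedy_hub_selection(hubs, points, max_range_km=25, n_hubs=2))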

  • Journal article
    Zambelli M, Cully A, Demiris Y, 2020,

    Multimodal representation models for prediction and control from partial information

    , Robotics and Autonomous Systems, Vol: 123, ISSN: 0921-8890

    Similar to humans, robots benefit from interacting with their environment through a number of different sensor modalities, such as vision, touch, and sound. However, learning from different sensor modalities is difficult, because the learning model must be able to handle diverse types of signals, and learn a coherent representation even when parts of the sensor inputs are missing. In this paper, a multimodal variational autoencoder is proposed to enable an iCub humanoid robot to learn representations of its sensorimotor capabilities from different sensor modalities. The proposed model is able to (1) reconstruct missing sensory modalities, (2) predict the sensorimotor state of self and the visual trajectories of other agents' actions, and (3) control the agent to imitate an observed visual trajectory. Also, the proposed multimodal variational autoencoder can capture the kinematic redundancy of the robot motion through the learned probability distribution. Training multimodal models is not trivial due to the combinatorial complexity given by the possibility of missing modalities. We propose a strategy to train multimodal models, which successfully achieves improved performance of different reconstruction models. Finally, extensive experiments have been carried out using an iCub humanoid robot, showing high performance in multiple reconstruction, prediction and imitation tasks.
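
    A minimal sketch, assuming a toy setup rather than the authors' iCub implementation: a multimodal variational autoencoder that zero-masks whichever modalities are missing before encoding, so it can still reconstruct all of them. The modality dimensions, layer sizes and the class name MultimodalVAE are illustrative assumptions.

        import torch
        import torch.nn as nn

        class MultimodalVAE(nn.Module):
            """Toy multimodal VAE: masked concatenated inputs, shared latent, joint decoder."""
            def __init__(self, dims=(10, 6, 4), latent=8):   # e.g. joints, touch, vision features
                super().__init__()
                self.dims = list(dims)
                total = sum(dims)
                self.encoder = nn.Sequential(nn.Linear(total, 64), nn.ReLU())
                self.to_mu, self.to_logvar = nn.Linear(64, latent), nn.Linear(64, latent)
                self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, total))

            def forward(self, xs, present):
                # zero out modalities marked as missing, mimicking partial sensory input
                masked = [x if p else torch.zeros_like(x) for x, p in zip(xs, present)]
                h = self.encoder(torch.cat(masked, dim=-1))
                mu, logvar = self.to_mu(h), self.to_logvar(h)
                z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation
                recon = self.decoder(z)
                return recon.split(self.dims, dim=-1), mu, logvar

        # usage: reconstruct all modalities when the third (vision) modality is missing
        model = MultimodalVAE()
        xs = [torch.randn(1, 10), torch.randn(1, 6), torch.randn(1, 4)]
        recons, mu, logvar = model(xs, present=[True, True, False])

    In practice the reconstruction and KL losses would be trained over random modality-dropout patterns so the model learns to cope with partial input, in the spirit of the training strategy described above.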

  • Conference paper
    Buizza C, Fischer T, Demiris Y, 2020,

    Real-time multi-person pose tracking using data assimilation

    , IEEE Winter Conference on Applications of Computer Vision, Publisher: IEEE

    We propose a framework for the integration of data assimilation and machine learning methods in human pose estimation, with the aim of enabling any pose estimation method to be run in real-time, whilst also increasing consistency and accuracy. Data assimilation and machine learning are complementary methods: the former allows us to make use of information about the underlying dynamics of a system but lacks the flexibility of a data-based model, which we can instead obtain with the latter. Our framework presents a real-time tracking module for any single or multi-person pose estimation system. Specifically, tracking is performed by a number of Kalman filters initiated for each new person appearing in a motion sequence. This permits tracking of multiple skeletons and reduces how frequently the computationally expensive pose estimation has to be run, enabling online pose tracking. The module tracks for N frames while the pose estimates are calculated for frame (N+1). This also results in increased consistency of person identification and reduced inaccuracies due to missing joint locations and inversion of left- and right-side joints.
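
    To make the tracking step concrete, here is a minimal, hypothetical sketch of one constant-velocity Kalman filter following a single 2-D joint between pose-estimation updates; the paper runs filters per person over whole skeletons, so treat this as a heavily simplified illustration with placeholder noise parameters.

        import numpy as np

        class JointKalman:
            """Constant-velocity Kalman filter for one 2-D joint: state = [x, y, vx, vy]."""
            def __init__(self, x, y, dt=1.0, q=1e-2, r=1.0):
                self.x = np.array([x, y, 0.0, 0.0])
                self.P = np.eye(4)
                self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
                self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])
                self.Q, self.R = q * np.eye(4), r * np.eye(2)

            def predict(self):                      # track while the pose estimator is busy
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q
                return self.x[:2]

            def update(self, z):                    # fuse a fresh pose-estimator detection
                y = np.asarray(z) - self.H @ self.x
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + K @ y
                self.P = (np.eye(4) - K @ self.H) @ self.P

        kf = JointKalman(100.0, 200.0)
        for _ in range(5):                          # predict over N intermediate frames
            print(kf.predict())
        kf.update([104.0, 203.0])                   # correct when frame (N+1)'s estimate arrives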

  • Conference paper
    Liu S, Davison A, Johns E, 2019,

    Self-supervised generalisation with meta auxiliary learning

    , 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Publisher: Neural Information Processing Systems Foundation, Inc.

    Learning with auxiliary tasks can improve the ability of a primary task to generalise. However, this comes at the cost of manually labelling auxiliary data. We propose a new method which automatically learns appropriate labels for an auxiliary task, such that any supervised learning task can be improved without requiring access to any further data. The approach is to train two neural networks: a label-generation network to predict the auxiliary labels, and a multi-task network to train the primary task alongside the auxiliary task. The loss for the label-generation network incorporates the loss of the multi-task network, and so this interaction between the two networks can be seen as a form of meta learning with a double gradient. We show that our proposed method, Meta AuXiliary Learning (MAXL), outperforms single-task learning on 7 image datasets, without requiring any additional data. We also show that MAXL outperforms several other baselines for generating auxiliary labels, and is even competitive when compared with human-defined auxiliary labels. The self-supervised nature of our method leads to a promising new direction towards automated generalisation. Source code can be found at https://github.com/lorenmt/maxl.
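
    The sketch below is a loose, single-step illustration of the two-network interaction using tiny linear models; it omits MAXL's label hierarchy and entropy regulariser, and the released code at the URL above is the authoritative implementation. All shapes, learning rates and the soft-label KL term are assumptions made for brevity.

        import torch
        import torch.nn.functional as F

        # Tiny functional model: shared features W, primary head Wp, auxiliary head Wa,
        # plus a label-generation network Wl that produces soft auxiliary labels.
        W  = torch.randn(8, 4, requires_grad=True)
        Wp = torch.randn(3, 8, requires_grad=True)
        Wa = torch.randn(5, 8, requires_grad=True)
        Wl = torch.randn(5, 4, requires_grad=True)
        opt_multi = torch.optim.SGD([W, Wp, Wa], lr=0.1)
        opt_label = torch.optim.SGD([Wl], lr=0.1)

        x = torch.randn(16, 4)
        y = torch.randint(0, 3, (16,))

        # Inner step: multi-task loss (primary + generated auxiliary labels), kept differentiable.
        aux = F.softmax(x @ Wl.t(), dim=1)
        h = torch.relu(x @ W.t())
        loss_inner = (F.cross_entropy(h @ Wp.t(), y)
                      + F.kl_div(F.log_softmax(h @ Wa.t(), dim=1), aux, reduction="batchmean"))
        gW, gWp = torch.autograd.grad(loss_inner, [W, Wp], create_graph=True)
        W_new, Wp_new = W - 0.1 * gW, Wp - 0.1 * gWp   # hypothetical inner update

        # Outer (meta) step: update the label network so the updated model fits the primary task better.
        h_new = torch.relu(x @ W_new.t())
        loss_outer = F.cross_entropy(h_new @ Wp_new.t(), y)
        opt_label.zero_grad()
        loss_outer.backward()
        opt_label.step()

        # Ordinary multi-task update with the (now fixed) auxiliary labels.
        aux_fixed = aux.detach()
        h = torch.relu(x @ W.t())
        loss_multi = (F.cross_entropy(h @ Wp.t(), y)
                      + F.kl_div(F.log_softmax(h @ Wa.t(), dim=1), aux_fixed, reduction="batchmean"))
        opt_multi.zero_grad()
        loss_multi.backward()
        opt_multi.step()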

  • Journal article
    Rakicevic N, Kormushev P, 2019,

    Active learning via informed search in movement parameter space for efficient robot task learning and transfer

    , Autonomous Robots, Vol: 43, Pages: 1917-1935, ISSN: 0929-5593

    Learning complex physical tasks via trial-and-error is still challenging for high-degree-of-freedom robots. The greatest challenges are devising a suitable objective function that defines the task, and the high sample complexity of learning the task. We propose a novel active learning framework, consisting of decoupled task model and exploration components, which does not require an objective function. The task model is specific to a task and maps the parameter space, defining a trial, to the trial outcome space. The exploration component enables efficient search in the trial-parameter space to generate the subsequent most informative trials, by simultaneously exploiting all the information gained from previous trials and reducing the task model’s overall uncertainty. We analyse the performance of our framework in a simulation environment and further validate it on a challenging bimanual-robot puck-passing task. Results show that the robot successfully acquires the necessary skills after only 100 trials without any prior information about the task or target positions. Decoupling the framework’s components also enables efficient skill transfer to new environments which is validated experimentally.
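
    Not the authors' framework, but a small sketch of the underlying idea: choose the next trial parameters where a probabilistic task model is most uncertain, here using a Gaussian-process surrogate from scikit-learn. The simulate_trial function, parameter bounds and candidate-sampling scheme are hypothetical.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def simulate_trial(params):
            """Hypothetical stand-in for executing a robot trial and measuring its outcome."""
            return np.sin(3 * params[0]) * np.cos(2 * params[1])

        rng = np.random.default_rng(0)
        bounds = np.array([[0.0, 1.0], [0.0, 1.0]])                  # trial-parameter space
        X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))     # a few seed trials
        y = np.array([simulate_trial(p) for p in X])

        gp = GaussianProcessRegressor()
        for trial in range(20):
            gp.fit(X, y)
            # candidate parameters; pick the one the task model is least certain about
            cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(200, 2))
            _, std = gp.predict(cand, return_std=True)
            x_next = cand[np.argmax(std)]
            X = np.vstack([X, x_next])
            y = np.append(y, simulate_trial(x_next))

        print("explored", len(X), "trials; outcome range", y.min(), y.max())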

  • Journal article
    Neerincx MA, van Vught W, Henkemans OB, Oleari E, Broekens J, Peters R, Kaptein F, Demiris Y, Kiefer B, Fumagalli D, Bierman B et al., 2019,

    Socio-cognitive engineering of a robotic partner for child's diabetes self-management

    , Frontiers in Robotics and AI, Vol: 6, Pages: 1-16, ISSN: 2296-9144

    Social or humanoid robots hardly show up in “the wild,” aiming at pervasive and enduring human benefits such as child health. This paper presents a socio-cognitive engineering (SCE) methodology that guides the ongoing research & development for an evolving, longer-lasting human-robot partnership in practice. The SCE methodology has been applied in a large European project to develop a robotic partner that supports the daily diabetes management processes of children, aged between 7 and 14 years (i.e., Personal Assistant for a healthy Lifestyle, PAL). Four partnership functions were identified and worked out (joint objectives, agreements, experience sharing, and feedback & explanation) together with a common knowledge-base and interaction design for child's prolonged disease self-management. In an iterative refinement process of three cycles, these functions, knowledge base and interactions were built, integrated, tested, refined, and extended so that the PAL robot could increasingly act as an effective partner for diabetes management. The SCE methodology helped to integrate into the human-agent/robot system: (a) theories, models, and methods from different scientific disciplines, (b) technologies from different fields, (c) varying diabetes management practices, and (d) last but not least, the diverse individual and context-dependent needs of the patients and caregivers. The resulting robotic partner proved to support the children on the three basic needs of the Self-Determination Theory: autonomy, competence, and relatedness. This paper presents the R&D methodology and the human-robot partnership framework for prolonged “blended” care of children with a chronic disease (children could use it up to 6 months; the robot in the hospitals and diabetes camps, and its avatar at home). It represents a new type of human-agent/robot systems with an evolving collective intelligence. The underlying ontology and design rationale can be used

  • Conference paper
    Schettino V, Demiris Y, 2019,

    Inference of user-intention in remote robot wheelchair assistance using multimodal interfaces

    , IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 4600-4606, ISSN: 2153-0858
  • Conference paper
    Cortacero K, Fischer T, Demiris Y, 2019,

    RT-BENE: A Dataset and Baselines for Real-Time Blink Estimation in Natural Environments

    , IEEE International Conference on Computer Vision Workshops, Publisher: Institute of Electrical and Electronics Engineers Inc.

    In recent years gaze estimation methods have made substantial progress, driven by the numerous application areas including human-robot interaction, visual attention estimation and foveated rendering for virtual reality headsets. However, many gaze estimation methods typically assume that the subject's eyes are open; for closed eyes, these methods provide irregular gaze estimates. Here, we address this assumption by first introducing a new open-sourced dataset with annotations of the eye-openness of more than 200,000 eye images, including more than 10,000 images where the eyes are closed. We further present baseline methods that allow for blink detection using convolutional neural networks. In extensive experiments, we show that the proposed baselines perform favourably in terms of precision and recall. We further incorporate our proposed RT-BENE baselines in the recently presented RT-GENE gaze estimation framework where it provides a real-time inference of the openness of the eyes. We argue that our work will benefit both gaze estimation and blink estimation methods, and we take steps towards unifying these methods.
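
    As a purely illustrative sketch (not the RT-BENE baselines), a tiny CNN that maps a pair of cropped eye patches to an eye-openness probability; the input size, layer configuration and decision threshold are assumptions.

        import torch
        import torch.nn as nn

        class BlinkNet(nn.Module):
            """Toy binary classifier: 2-channel (left/right eye) 36x60 patches -> P(eyes closed)."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.head = nn.Sequential(nn.Flatten(),
                                          nn.Linear(32 * 9 * 15, 64), nn.ReLU(),
                                          nn.Linear(64, 1))

            def forward(self, eyes):                  # eyes: (batch, 2, 36, 60)
                return torch.sigmoid(self.head(self.features(eyes)))

        model = BlinkNet()
        prob_closed = model(torch.randn(4, 2, 36, 60))   # e.g. flag a blink when prob_closed > 0.5
        print(prob_closed.shape)                          # torch.Size([4, 1])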

  • Journal article
    Taniguchi T, Ugur E, Ogata T, Nagai T, Demiris Y et al., 2019,

    Editorial: Machine Learning Methods for High-Level Cognitive Capabilities in Robotics

    , Frontiers in Neurorobotics, Vol: 13, ISSN: 1662-5218
  • Conference paper
    Ezzat A, Thakkar R, Kogkas A, Mylonas G et al., 2019,

    Perceptions of surgeons and scrub nurses towards a novel eye-tracking based robotic scrub nurse platform

    , International Surgical Congress of the Association of Surgeons of Great Britain and Ireland (ASGBI), Publisher: Wiley, Pages: 81-82, ISSN: 0007-1323
  • Conference paper
    Saeedi S, Carvalho EDC, Li W, Tzoumanikas D, Leutenegger S, Kelly PHJ, Davison AJ et al., 2019,

    Characterizing visual localization and mapping datasets

    , 2019 International Conference on Robotics and Automation (ICRA), Publisher: Institute of Electrical and Electronics Engineers, ISSN: 1050-4729

    Benchmarking mapping and motion estimation algorithms is established practice in robotics and computer vision. As the diversity of datasets increases, in terms of the trajectories, models, and scenes, it becomes a challenge to select datasets for a given benchmarking purpose. Inspired by the Wasserstein distance, this paper addresses this concern by developing novel metrics to evaluate trajectories and environments without relying on any SLAM or motion estimation algorithm. The metrics, which so far have been missing in the research community, can be applied to the plethora of datasets that exist. Additionally, to improve the robotics SLAM benchmarking, the paper presents a new dataset for visual localization and mapping algorithms. A broad range of real-world trajectories is combined with very high-quality scenes and a rendering framework to create a set of synthetic datasets with ground-truth trajectories and dense maps, representative of key SLAM applications such as virtual reality (VR), micro aerial vehicle (MAV) flight, and ground robotics.
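
    A small hedged example of the general flavour of such metrics (not the paper's exact definitions): comparing the distributions of per-frame camera speeds of two trajectories with the 1-D Wasserstein distance from SciPy. The trajectory arrays below are synthetic placeholders.

        import numpy as np
        from scipy.stats import wasserstein_distance

        def speed_profile(positions, dt=1.0 / 30.0):
            """Per-frame camera speed from an (N, 3) array of positions."""
            return np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt

        rng = np.random.default_rng(1)
        # two synthetic trajectories: a slow handheld-style path and a faster MAV-style path
        traj_a = np.cumsum(rng.normal(0, 0.005, size=(600, 3)), axis=0)
        traj_b = np.cumsum(rng.normal(0, 0.02, size=(600, 3)), axis=0)

        # a smaller distance suggests the two datasets stress a SLAM system in a more similar way
        d = wasserstein_distance(speed_profile(traj_a), speed_profile(traj_b))
        print(f"Wasserstein distance between speed distributions: {d:.3f} m/s")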

  • Conference paper
    Bujanca M, Gafton P, Saeedi S, Nisbet A, Bodin B, O'Boyle MFP, Davison AJ, Kelly PHJ, Riley G, Lennox B, Lujan M, Furber S et al., 2019,

    SLAMBench 3.0: Systematic automated reproducible evaluation of SLAM systems for robot vision challenges and scene understanding

    , 2019 International Conference on Robotics and Automation (ICRA), Publisher: Institute of Electrical and Electronics Engineers, ISSN: 1050-4729

    As the SLAM research area matures and the number of SLAM systems available increases, the need for frameworks that can objectively evaluate them against prior work grows. This new version of SLAMBench moves beyond traditional visual SLAM, and provides new support for scene understanding and non-rigid environments (dynamic SLAM). More concretely for dynamic SLAM, SLAMBench 3.0 includes the first publicly available implementation of DynamicFusion, along with an evaluation infrastructure. In addition, we include two SLAM systems (one dense, one sparse) augmented with convolutional neural networks for scene understanding, together with datasets and appropriate metrics. Through a series of use-cases, we demonstrate the newly incorporated algorithms, visualisation aids and metrics (6 new metrics, 4 new datasets and 5 new algorithms).
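
    For illustration only (this is a generic benchmark metric, not one of SLAMBench 3.0's specific new metrics): the absolute trajectory error (ATE) RMSE that SLAM benchmarks commonly report, computed here for trajectories assumed to be already aligned; both arrays are placeholders.

        import numpy as np

        def ate_rmse(estimated, ground_truth):
            """RMSE of per-frame translational error between aligned (N, 3) trajectories."""
            errors = np.linalg.norm(estimated - ground_truth, axis=1)
            return float(np.sqrt(np.mean(errors ** 2)))

        gt = np.linspace([0, 0, 0], [10, 0, 0], 100)                      # placeholder ground truth
        est = gt + np.random.default_rng(2).normal(0, 0.03, gt.shape)     # noisy estimate
        print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")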

  • Conference paper
    Avery J, Runciman M, Darzi A, Mylonas GP et al., 2019,

    Shape sensing of variable stiffness soft robots using electrical impedance tomography

    , International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 9066-9072, ISSN: 1050-4729

    Soft robotic systems offer benefits over traditional rigid systems through reduced contact trauma with soft tissues and by enabling access through tortuous paths in minimally invasive surgery. However, the inherent deformability of soft robots places both a greater onus on accurate modelling of their shape, and greater challenges in realising intraoperative shape sensing. Herein we present a proprioceptive (self-sensing) soft actuator, with an electrically conductive working fluid. Electrical impedance measurements from up to six electrodes enabled tomographic reconstructions using Electrical Impedance Tomography (EIT). A new Frequency Division Multiplexed (FDM) EIT system was developed capable of measurements of 66 dB SNR with 20 ms temporal resolution. The concept was examined in two two-degree-of-freedom designs: a hydraulic hinged actuator and a pneumatic finger actuator with hydraulic beams. Both cases demonstrated that impedance measurements could be used to infer shape changes, and EIT images reconstructed during actuation showed distinct patterns with respect to each degree of freedom (DOF). Whilst there was some mechanical hysteresis observed, the repeatability of the measurements and resultant images was high. The results show the potential of FDM-EIT as a low-cost, low profile shape sensor in soft robots.
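
    A minimal sketch of one standard way to turn boundary impedance measurements into an image: a linearised, one-step EIT reconstruction with Tikhonov regularisation. The sensitivity matrix here is random, standing in for a real forward model, and the regularisation weight and grid size are assumptions rather than the FDM-EIT system's settings.

        import numpy as np

        def linearised_eit(J, dv, lam=1e-2):
            """One-step Tikhonov reconstruction: dsigma = (J^T J + lam I)^-1 J^T dv."""
            n = J.shape[1]
            return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ dv)

        rng = np.random.default_rng(3)
        n_meas, n_pix = 30, 64                         # e.g. few-electrode protocol, coarse 8x8 grid
        J = rng.normal(size=(n_meas, n_pix))           # placeholder sensitivity (Jacobian) matrix
        true_change = np.zeros(n_pix); true_change[20] = 1.0     # a local conductivity change
        dv = J @ true_change + rng.normal(0, 0.01, n_meas)       # simulated voltage differences
        img = linearised_eit(J, dv).reshape(8, 8)
        print(np.unravel_index(np.argmax(img), img.shape))       # location of the strongest change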

  • Journal article
    Runciman M, Darzi A, Mylonas G, 2019,

    Soft robotics in minimally invasive surgery

    , Soft Robotics, Vol: 6, Pages: 423-443, ISSN: 2169-5172

    Soft robotic devices have desirable traits for applications in minimally invasive surgery (MIS) but many interdisciplinary challenges remain unsolved. To understand current technologies, we carried out a keyword search using the Web of Science and Scopus databases, applied inclusion and exclusion criteria, and compared several characteristics of the soft robotic devices for MIS in the resulting articles. There was low diversity in the device designs and a wide-ranging level of detail regarding their capabilities. We propose a standardised comparison methodology to characterise soft robotics for various MIS applications, which will aid designers producing the next generation of devices.

  • Conference paper
    Fathi J, Vrielink TJCO, Runciman MS, Mylonas GP et al., 2019,

    A Deployable Soft Robotic Arm with Stiffness Modulation for Assistive Living Applications

    , International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 1479-1485, ISSN: 1050-4729
  • Journal article
    Zhang F, Cully A, Demiris Y, 2019,

    Probabilistic real-time user posture tracking for personalized robot-assisted dressing

    , IEEE Transactions on Robotics, Vol: 35, Pages: 873-888, ISSN: 1552-3098

    Robotic solutions to dressing assistance have the potential to provide tremendous support for elderly and disabled people. However, unexpected user movements may lead to dressing failures or even pose a risk to the user. Tracking such user movements with vision sensors is challenging due to severe visual occlusions created by the robot and clothes. In this paper, we propose a probabilistic tracking method using Bayesian networks in latent spaces, which fuses robot end-effector positions and force information to enable cameraless and real-time estimation of the user postures during dressing. The latent spaces are created before dressing by modeling the user movements with a Gaussian process latent variable model, taking the user’s movement limitations into account. We introduce a robot-assisted dressing system that combines our tracking method with hierarchical multitask control to minimize the force between the user and the robot. The experimental results demonstrate the robustness and accuracy of our tracking method. The proposed method enables the Baxter robot to provide personalized dressing assistance in putting on a sleeveless jacket for users with (simulated) upper-body impairments.

  • Conference paper
    Falck F, Larppichet K, Kormushev P, 2019,

    DE VITO: A dual-arm, high degree-of-freedom, lightweight, inexpensive, passive upper-limb exoskeleton for robot teleoperation

    , TAROS: Annual Conference Towards Autonomous Robotic Systems, Publisher: Springer, ISSN: 0302-9743

    While robotics has made significant advances in perception, planning and control in recent decades, the vast majority of tasks easily completed by a human, especially acting in dynamic, unstructured environments, are far from being autonomously performed by a robot. Teleoperation, remotely controlling a slave robot by a human operator, can be a realistic, complementary transition solution that uses the motion intelligence of a human in complex tasks while exploiting the robot’s autonomous reliability and precision in less challenging situations. We introduce DE VITO, a seven degree-of-freedom, dual-arm upper-limb exoskeleton that passively measures the pose of a human arm. DE VITO is a lightweight, simplistic and energy-efficient design with a total material cost at least an order of magnitude lower than that of previous work. Given the estimated human pose, we implement both joint and Cartesian space kinematic control algorithms and present qualitative experimental results on various complex manipulation tasks teleoperating Robot DE NIRO, a research platform for mobile manipulation, that demonstrate the functionality of DE VITO. We provide the CAD models, open-source code and supplementary videos of DE VITO at http://www.imperial.ac.uk/robot-intelligence/robots/de_vito/.

  • Conference paper
    AlAttar A, Rouillard L, Kormushev P, 2019,

    Autonomous air-hockey playing cobot using optimal control and vision-based Bayesian tracking

    , Towards Autonomous Robotic Systems, Publisher: Springer, ISSN: 0302-9743

    This paper presents a novel autonomous air-hockey playing collaborative robot (cobot) that provides human-like gameplay against human opponents. Vision-based Bayesian tracking of the puck and striker are used in an Analytic Hierarchy Process (AHP)-based probabilistic tactical layer for high-speed perception. The tactical layer provides commands for an active control layer that controls the Cartesian position and yaw angle of a custom end effector. The active layer uses optimal control of the cobot’s posture inside the task nullspace. The kinematic redundancy is resolved using a weighted Moore-Penrose pseudo-inversion technique. Experiments with human players show high-speed human-like gameplay with potential applications in the growing field of entertainment robotics.
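
    A brief sketch of the weighted pseudo-inverse idea mentioned above, mapping a desired Cartesian velocity to joint velocities through a joint-weighting matrix; the Jacobian, weights and velocities are placeholder numbers rather than the cobot's, and the abstract's null-space posture optimisation is omitted.

        import numpy as np

        def weighted_pinv_velocities(J, xdot, W):
            """qdot = W^-1 J^T (J W^-1 J^T)^-1 xdot  (weighted Moore-Penrose pseudo-inverse)."""
            W_inv = np.linalg.inv(W)
            return W_inv @ J.T @ np.linalg.solve(J @ W_inv @ J.T, xdot)

        J = np.array([[0.2, 0.5, 0.1, 0.0],      # placeholder 2x4 task Jacobian (redundant arm)
                      [0.4, 0.1, 0.3, 0.2]])
        xdot = np.array([0.05, -0.02])           # desired end-effector velocity (m/s)
        W = np.diag([1.0, 2.0, 1.0, 4.0])        # penalise motion of the heavier joints more
        qdot = weighted_pinv_velocities(J, xdot, W)
        print(qdot, J @ qdot)                    # J @ qdot reproduces the commanded xdot

    Among all joint velocities that achieve the commanded end-effector motion, this solution minimises the weighted norm qdot^T W qdot, which is what makes the weighting a simple way to bias the redundancy resolution.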

  • Journal article
    Lu Q, Rojas N, 2019,

    On soft fingertips for in-hand manipulation: modelling and implications for robot hand design

    , IEEE Robotics and Automation Letters, Vol: 4, Pages: 2471-2478, ISSN: 2377-3766

    Contact models for soft fingertips are able to precisely compute deformation when information about contact forces and object position is known, thus improving the traditional soft finger contact model. However, the functionality of these approaches for the study of in-hand manipulation with robot hands has been shown to be limited, since the location of the manipulated object is uncertain due to compliance and closed-loop constraints. This paper presents a novel, tractable approach for contact modelling of soft fingertips in within-hand dexterous manipulation settings. The proposed method is based on a relaxation of the kinematic equivalent of point contact with friction, modelling the interaction between fingertips and objects as joints with clearances rather than ideal instances, and then approximating clearances via affine arithmetic to facilitate computation. These ideas are introduced using planar manipulation to aid discussion, and are used to predict the reachable workspace of a two-fingered robot hand with fingertips of different hardness and geometry. Numerical and empirical experiments are conducted to analyse the effects of soft fingertips on manipulation operability; results demonstrate the functionality of the proposed approach, as well as a tradeoff between hardness and depth in soft fingertips to achieve better manipulation performance of dexterous robot hands.

  • Journal article
    Clark A, Rojas N, 2019,

    Assessing the performance of variable stiffness continuum structures of large diameter

    , IEEE Robotics and Automation Letters, Vol: 4, Pages: 2455-2462, ISSN: 2377-3766

    Variable stiffness continuum structures of large diameters are suitable for high-capability robots, such as in industrial practices where high loads and human–robot interaction are expected. Existing variable stiffness technologies have focused on application as medical manipulators, and as such have been limited to small diameter designs (~15 mm). Various performance metrics have been presented for continuum structures thus far, focusing on force resistance, but no universal testing methodology for continuum structures that encapsulates their overall performance has been provided. This letter presents five individual qualities that can be experimentally quantified to establish the overall performance capability of a design with respect to its use as a variable stiffness continuum manipulator. Six large diameter (>40 mm) continuum structures are developed following both conventional (granular and layer jamming) and novel (hybrid designs and structurally supported layer jamming) approaches and are compared using the presented testing methodology. The development of the continuum structures is discussed, and a detailed insight into the tested quality selection and experimental methodology is presented. Results of experiments demonstrate the suitability of the proposed approach for assessing variable stiffness continuum capability across the design.
