Below is a list of all relevant publications authored by Robotics Forum members.


  • Journal article
    Baron N, Philippides A, Rojas N, 2020,

    On the false positives and false negatives of the Jacobian matrix in kinematically redundant parallel mechanisms

    , IEEE Transactions on Robotics, Vol: 36, ISSN: 1552-3098

    The Jacobian matrix is a highly popular tool for the control and performance analysis of closed-loop robots. Its usefulness in parallel mechanisms is certainly apparent, yet its application to solving motion planning problems, or other higher-level questions, has seldom been questioned, or has been limited to non-redundant systems. In this paper, we discuss the shortcomings of the Jacobian matrix under redundancy, in particular when applied to kinematically redundant parallel architectures with non-serially connected actuators. These architectures have become fairly popular recently, as they allow the end-effector to achieve full rotations, an impossible task with traditional topologies. The problems with the Jacobian matrix in these novel systems arise from the need to eliminate redundant variables when forming it, resulting both in situations where the Jacobian incorrectly identifies singularities (false positives) and in situations where it fails to identify singularities (false negatives). These issues have thus far remained unaddressed in the literature. We highlight these limitations herein by demonstrating several cases using numerical examples of both planar and spatial architectures.
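
    For context, the singularity analysis the abstract refers to is conventionally built on the first-order kinematics of the mechanism; a minimal statement is sketched below, in generic notation assumed for illustration rather than taken from the paper.

        % Generic first-order kinematics of a parallel mechanism
        % (notation assumed for illustration, not the paper's):
        \[
          \mathbf{A}(\mathbf{x},\mathbf{q})\,\dot{\mathbf{x}} =
          \mathbf{B}(\mathbf{x},\mathbf{q})\,\dot{\mathbf{q}},
          \qquad
          \mathbf{J} = \mathbf{A}^{-1}\mathbf{B}
        \]
        % Singularities are conventionally flagged where rank is lost:
        % det A = 0 (type II, parallel) or det B = 0 (type I, serial).
        % In a kinematically redundant mechanism, eliminating redundant
        % variables to form a square Jacobian can create spurious rank
        % loss (a false positive) or hide genuine rank loss (a false
        % negative), which is the failure mode the paper examines.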

  • Conference paper
    Zhang F, Demiris Y, 2020,

    Learning grasping points for garment manipulation in robot-assisted dressing

    , 2020 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 9114-9120

    Assistive robots have the potential to provide tremendous support for disabled and elderly people in their daily dressing activities. Recent studies on robot-assisted dressing usually simplify the setup of the initial robot configuration by manually attaching the garments to the robot end-effector and positioning them close to the user's arm. A fundamental challenge in automating such a process for robots is computing suitable grasping points on garments that facilitate robotic manipulation. In this paper, we address this problem by introducing a supervised deep neural network to locate a predefined grasping point on the garment, using depth images for their invariance to color and texture. To reduce the amount of real data required, which is costly to collect, we leverage the power of simulation to produce large amounts of labeled data. The network is jointly trained with synthetic datasets of depth images and a limited amount of real data. We introduce a robot-assisted dressing system that combines the grasping point prediction method with a grasping and manipulation strategy that takes grasping orientation computation and robot-garment collision avoidance into account. The experimental results demonstrate that our method is capable of yielding accurate grasping point estimations. The proposed dressing system enables the Baxter robot to autonomously grasp a hospital gown hung on a rail, bring it close to the user, and successfully dress the upper body.
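
    As a rough illustration of the kind of model this abstract describes, a supervised network regressing a predefined grasping point from a depth image, a minimal sketch follows; the architecture, layer sizes, and all names are assumptions for illustration, not the authors' implementation.

        # Minimal sketch of a depth-image -> grasp-point regressor (PyTorch).
        # Layer sizes and names are illustrative assumptions.
        import torch
        import torch.nn as nn

        class GraspPointNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(128, 2)  # (u, v) grasp point in image coords

            def forward(self, depth):          # depth: (B, 1, H, W)
                return self.head(self.features(depth).flatten(1))

        # Joint training on synthetic plus limited real data, as the abstract
        # describes, amounts to mixing both datasets in the loader and
        # minimising e.g. an L2 loss against the labelled grasping points.
        model = GraspPointNet()
        loss = nn.MSELoss()(model(torch.rand(8, 1, 128, 128)), torch.rand(8, 2))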

  • Journal article
    Baron N, Philippides A, Rojas N, 2020,

    A robust geometric method of singularity avoidance for kinematically redundant planar parallel robot manipulators

    , Mechanism and Machine Theory, Vol: 151, Article: 103863, ISSN: 0094-114X
  • Journal article
    Lu Q, Clark A, Shen M, Rojas N et al., 2020,

    An origami-inspired variable friction surface for increasing the dexterity of robotic grippers

    , IEEE Robotics and Automation Letters, Vol: 5, Pages: 2538-2545, ISSN: 2377-3766

    While the grasping capability of robotic grippers has shown significant development, the ability to manipulate objects within the hand is still limited. One explanation for this limitation is the lack of controlled contact variation between the grasped object and the gripper. For instance, human hands have the ability to firmly grip object surfaces, as well as slide over object faces, an aspect that aids the enhanced manipulation of objects within the hand without losing contact. In this letter, we present a parametric, origami-inspired thin surface capable of transitioning between a high-friction and a low-friction state, suitable for implementation as an epidermis in robotic fingers. A numerical analysis of the proposed surface based on its design parameters, a force analysis, and its performance in in-hand manipulation tasks are presented. Through the development of a simple two-fingered, two-degree-of-freedom gripper utilizing the proposed variable-friction surfaces with different parameters, we experimentally demonstrate the improved manipulation capabilities of the hand when compared to the same gripper without changeable friction. Results show that the pattern density and valley gap are the main parameters that affect the in-hand manipulation performance. The origami-inspired thin surface with a higher pattern density generated a smaller valley gap and smaller height change, producing a more stable improvement of the manipulation capabilities of the hand.

  • Journal article
    He L, Lu Q, Abad S-A, Rojas N, Nanayakkara DPT et al., 2020,

    Soft fingertips with tactile sensing and active deformation for robust grasping of delicate objects

    , IEEE Robotics and Automation Letters, Vol: 5, Pages: 2714-2721, ISSN: 2377-3766

    Soft fingertips have shown significant adaptability for grasping a wide range of object shapes, thanks to elasticity. This ability can be enhanced to grasp soft, delicate objects by adding touch sensing. However, in these cases, the complete restraint and robustness of the grasps have proved to be challenging, as the exertion of additional forces on the fragile object can result in damage. This letter presents a novel soft fingertip design for delicate objects based on the concept of embedded air cavities, which allow the dual ability of tactile sensing and active shape-changing. The pressurized air cavities act as soft tactile sensors to control gripper position from internal pressure variation, and active fingertip deformation is achieved by applying positive pressure to these cavities, which then enables a delicate object to be kept securely in position, despite externally applied forces, by form closure. We demonstrate this improved grasping capability by comparing the displacement of grasped delicate objects exposed to high-speed motions. Results show that passive soft fingertips fail to restrain fragile objects at accelerations as low as 0.1 m/s²; in contrast, with the proposed fingertips, delicate objects remain completely secure even at accelerations of more than 5 m/s².
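
    A schematic of the control idea described above, using the cavity pressure as a contact signal before switching to active deformation, is sketched below; the sensor and valve interfaces are hypothetical placeholders, not the authors' API.

        # Schematic grasp loop driven by air-cavity pressure feedback.
        # read_pressure/close_by/inflate are hypothetical placeholders.
        CONTACT_THRESHOLD_KPA = 0.5  # pressure rise over baseline taken as contact

        def grasp_delicate(gripper, cavity, step_mm=0.2):
            baseline = cavity.read_pressure()
            # Close until the tactile (pressure) signal indicates contact.
            while cavity.read_pressure() - baseline < CONTACT_THRESHOLD_KPA:
                gripper.close_by(step_mm)
            # Active deformation: positive pressure keeps the object in
            # place by form closure rather than by squeezing harder.
            cavity.inflate(target_kpa=20.0)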

  • Journal article
    Tsai Y-Y, Xiao B, Johns E, Yang G-Z et al., 2020,

    Constrained-space optimization and reinforcement learning for complex tasks

    , IEEE Robotics and Automation Letters, Vol: 5, Pages: 683-690, ISSN: 2377-3766
  • Journal article
    Zhao M, Oude Vrielink TJC, Kogkas A, Runciman M, Elson D, Mylonas G et al., 2020,

    LaryngoTORS: a novel cable-driven parallel robotic system for transoral laser phonosurgery

    , IEEE Robotics and Automation Letters, Vol: 5, Pages: 1516-1523, ISSN: 2377-3766

    Transoral laser phonosurgery is a commonly used surgical procedure in which a laser beam is used to perform incision, ablation or photocoagulation of laryngeal tissues. Two techniques are commonly practiced: free-beam and fiber delivery. For free-beam delivery, a laser scanner is integrated into a surgical microscope to provide an accurate laser scanning pattern. This approach can only be used under direct line of sight, which may cause increased postoperative pain and injury to the patient, is uncomfortable for the surgeon during prolonged operations, offers poor manipulability, and requires extensive training. In contrast, in the fiber delivery technique, a flexible fiber is used to transmit the laser beam, and direct line of sight is therefore not required. However, this technique can only achieve manual levels of accuracy, repeatability and velocity, and does not allow for pattern scanning. Robotic systems have been developed to overcome the limitations of both techniques. However, these systems offer limited workspace and degrees-of-freedom (DoF), limiting their clinical applicability. This work presents the LaryngoTORS, a robotic system that aims at overcoming the limitations of the two techniques by using a cable-driven parallel mechanism (CDPM) attached at the end of a curved laryngeal blade for controlling the end tip of the laser fiber. The system allows autonomous generation of scanning patterns or user-driven free-path scanning. Path scan validation demonstrated errors as low as 0.054±0.028 mm and high repeatability of 0.027±0.020 mm (6×2 mm arc line). Ex vivo tests on chicken tissue have been carried out. The results show the ability of the system to overcome the limitations of current methods with high accuracy and repeatability using the superior fiber delivery approach.

  • Journal article
    Liow L, Clark A, Rojas N, 2020,

    OLYMPIC: a modular, tendon-driven prosthetic hand with novel finger and wrist coupling mechanisms

    , IEEE Robotics and Automation Letters, Vol: 5, Pages: 299-306, ISSN: 2377-3766

    Prosthetic hands, while having shown significant progress in affordability, typically suffer from limited repairability, specifically by the user themselves. Several modular hands have been proposed to address this, but these solutions require handling of intricate components or are unsuitable for prosthetic use due to the large volume and weight resulting from the added mechanical complexity needed to achieve this modularity. In this paper, we propose a fully modular design for a prosthetic hand with finger- and wrist-level modularity, allowing the removal and attachment of tendon-driven fingers without the need for tools, retendoning, or rewiring. Our innovative design enables placement of the motors behind the hand for remote actuation of the tendons, which are contained solely within the fingers. Details of the novel coupling-transmission mechanisms enabling this are presented, and the capabilities of a prototype using a control-independent grasping benchmark are discussed. The modular detachment torque of the fingers is also computed to analyse the trade-off between intentional removal and the ability to withstand external loads. Experimental results demonstrate that the prosthetic hand is able to grasp a wide range of household and food items of different shape, size, and weight without ejecting fingers, while allowing a user to remove them easily using a single hand.

  • Journal article
    Gao Y, Chang HJ, Demiris Y, 2020,

    User modelling using multimodal information for personalised dressing assistance

    , IEEE Access, Vol: 8, Pages: 45700-45714, ISSN: 2169-3536
  • Conference paper
    Nunes UM, Demiris Y, 2020,

    Online unsupervised learning of the 3D kinematic structure of arbitrary rigid bodies

    , IEEE/CVF International Conference on Computer Vision (ICCV), Publisher: IEEE Computer Society, Pages: 3808-3816, ISSN: 1550-5499

    This work addresses the problem of 3D kinematic structure learning of arbitrary articulated rigid bodies from RGB-D data sequences. Typically, this problem is addressed by offline methods that process a batch of frames, assuming that complete point trajectories are available. However, this approach is not feasible when considering scenarios that require continuity and fluidity, for instance, human-robot interaction. In contrast, we propose to tackle this problem in an online unsupervised fashion, by recursively maintaining the metric distance of the scene's 3D structure, while achieving real-time performance. The influence of noise is mitigated by building a similarity measure based on a linear embedding representation and incorporating this representation into the original metric distance. The kinematic structure is then estimated based on a combination of implicit motion and spatial properties. The proposed approach achieves competitive performance both quantitatively and qualitatively in terms of estimation accuracy, even compared to offline methods.

  • Conference paper
    Pardo F, Levdik V, Kormushev P, 2020,

    Scaling all-goals updates in reinforcement learning using convolutional neural networks

    , 34th AAAI Conference on Artificial Intelligence (AAAI 2020), Publisher: Association for the Advancement of Artificial Intelligence, Pages: 5355-5362, ISSN: 2374-3468

    Being able to reach any desired location in the environment can be a valuable asset for an agent. Learning a policy to navigate between all pairs of states individually is often not feasible. An all-goals updating algorithm uses each transition to learn Q-values towards all goals simultaneously and off-policy. However, the expensive numerous updates in parallel limited the approach to small tabular cases so far. To tackle this problem we propose to use convolutional network architectures to generate Q-values and updates for a large number of goals at once. We demonstrate the accuracy and generalization qualities of the proposed method on randomly generated mazes and Sokoban puzzles. In the case of on-screen goal coordinates, the resulting mapping from frames to distance-maps directly informs the agent about which places are reachable and in how many steps. As an example of application, we show that replacing the random actions in ε-greedy exploration by several actions towards feasible goals generates better exploratory trajectories on Montezuma's Revenge and Super Mario All-Stars games.
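
    To make the all-goals idea concrete: rather than a single Q-vector, the network emits one Q-value per action at every candidate on-screen goal position in a single forward pass. A minimal sketch follows, with shapes, sizes, and names assumed for illustration.

        # Sketch: a fully convolutional head producing Q-values for all
        # on-screen goal coordinates at once (one action-vector per pixel).
        import torch
        import torch.nn as nn

        num_actions = 4
        net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_actions, 1),  # Q(s, a, goal=(i, j)) per pixel
        )

        frame = torch.rand(1, 3, 64, 64)    # observation
        q_all_goals = net(frame)            # (1, num_actions, 64, 64)
        # Distance-map style readout: greedy action toward every goal at once.
        best_action_per_goal = q_all_goals.argmax(dim=1)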

  • Conference paper
    Matheson E, Secoli R, Galvan S, Baena FRY et al., 2020,

    Human-robot visual interface for 3D steering of a flexible, bioinspired needle for neurosurgery

    , 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE

    Robotic minimally invasive surgery has been a subject of intense research and development over the last three decades, due to the clinical advantages it holds for patients and doctors alike. Particularly for drug delivery mechanisms, higher precision and the ability to follow complex trajectories in three dimensions (3D) have led to interest in flexible, steerable needles such as the programmable bevel-tip needle (PBN). Steering in 3D, however, holds practical challenges for surgeons, as interfaces are traditionally designed for straight-line paths. This work presents a pilot study undertaken to evaluate a novel human-machine visual interface for the steering of a robotic PBN, in which both qualitative evaluation of the interface and quantitative evaluation of the performance of the subjects in following a 3D path are measured. A series of needle insertions were performed in phantom tissue (gelatin) by the experiment subjects. Users could adequately use the system with little training and low workload, and reached the target point at the end of the path with millimeter-range accuracy.

  • Conference paper
    Chacon-Quesada R, Demiris Y, 2020,

    Augmented reality controlled smart wheelchair using dynamic signifiers for affordance representation

    , 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE

    The design of augmented reality interfaces for people with mobility impairments is a novel area with great potential, as well as multiple outstanding research challenges. In this paper we present an augmented reality user interface for controlling a smart wheelchair with a head-mounted display, to provide assistance for mobility-restricted people. Our motivation is to reduce the cognitive load required to control a smart wheelchair. A key element of our platform is the ability to control the smart wheelchair using the concepts of affordances and signifiers. In addition to the technical details of our platform, we present a baseline study evaluating the platform through user trials with able-bodied individuals and two different affordances: 1) Door Go Through and 2) People Approach. To present these affordances to the user, we evaluated fixed symbol-based signifiers versus our novel dynamic signifiers in terms of how easy the suggested actions and their relation to the objects were to understand. Our results show a clear preference for dynamic signifiers. In addition, we show that the task load reported by participants is lower when controlling the smart wheelchair with our augmented reality user interface than when using the joystick, which is consistent with their qualitative answers.

  • Conference paper
    Zolotas M, Demiris Y, 2020,

    Towards explainable shared control using augmented reality

    , IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), Publisher: IEEE, Pages: 3020-3026

    Shared control plays a pivotal role in establishing effective human-robot interactions. Traditional control-sharing methods strive to complement a human’s capabilities at safely completing a task, and thereby rely on users forming a mental model of the expected robot behaviour. However, these methods can often bewilder or frustrate users whenever their actions do not elicit the intended system response, forming a misalignment between the respective internal models of the robot and human. To resolve this model misalignment, we introduce Explainable Shared Control as a paradigm in which assistance and information feedback are jointly considered. Augmented reality is presented as an integral component of this paradigm, by visually unveiling the robot’s inner workings to human operators. Explainable Shared Control is instantiated and tested for assistive navigation in a setup involving a robotic wheelchair and a Microsoft HoloLens with add-on eye tracking. Experimental results indicate that the introduced paradigm facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment.

  • Conference paper
    Saputra RP, Rakicevic N, Kormushev P, 2020,

    Sim-to-real learning for casualty detection from ground projected point cloud data

    , 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), Publisher: IEEE

    This paper addresses the problem of human body detection, particularly a human body lying on the ground (i.e., a casualty), using point cloud data. This ability to detect a casualty is one of the most important features of mobile rescue robots, in order for them to be able to operate autonomously. We propose a deep-learning-based casualty detection method using a deep convolutional neural network (CNN). This network is trained to detect a casualty using a point-cloud data input. In the method we propose, the point cloud input is pre-processed to generate a depth-image-like, ground-projected heightmap. This heightmap is generated based on the projected distance of each point onto the detected ground plane within the point cloud data. The generated heightmap, in image form, is then used as an input for the CNN to detect a human body lying on the ground. To train the neural network, we propose a novel sim-to-real approach, in which the network model is trained using synthetic data obtained in simulation and then tested on real sensor data. To make the model transferable to real data implementations, during the training we adopt specific data augmentation strategies with the synthetic training data. The experimental results show that data augmentation introduced during the training process is essential for improving the performance of the trained model on real data. More specifically, the results demonstrate that the data augmentations on raw point-cloud data have contributed to a considerable improvement in the performance of the trained model.
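
    The heightmap generation step described above can be pictured as binning each point's signed distance to the detected ground plane onto a grid over that plane; a minimal sketch follows, with grid shape and resolution assumed for illustration.

        # Sketch of a ground-projected heightmap from a point cloud.
        # Grid shape and resolution are illustrative assumptions.
        import numpy as np

        def heightmap(points, plane, res=0.05, shape=(128, 128)):
            """points: (N, 3) xyz; plane: (a, b, c, d) with ax+by+cz+d = 0."""
            n = np.asarray(plane[:3], dtype=float)
            n /= np.linalg.norm(n)
            heights = points @ n + plane[3]      # signed distance to the ground
            # Bin points into pixels over the plane (x/y used for brevity).
            ix = np.clip((points[:, 0] / res).astype(int) + shape[0] // 2, 0, shape[0] - 1)
            iy = np.clip((points[:, 1] / res).astype(int) + shape[1] // 2, 0, shape[1] - 1)
            img = np.zeros(shape)
            np.maximum.at(img, (ix, iy), heights)  # keep the highest point per cell
            return img

        img = heightmap(np.random.rand(1000, 3), (0.0, 0.0, 1.0, 0.0))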

  • Conference paper
    Lu Q, Liang H, Nanayakkara DPT, Rojas N et al., 2020,

    Precise in-hand manipulation of soft objects using soft fingertips with tactile sensing and active deformation

    , IEEE International Conference on Soft Robotics, Publisher: IEEE

    While soft fingertips have shown significant development for grasping tasks, their ability to facilitate the manipulation of objects within the hand is still limited. Thanks to elasticity, soft fingertips enhance the ability to grasp soft objects. However, the in-hand manipulation of these objects has proved to be challenging, with both soft fingertips and traditional designs, as the control of coordinated fine fingertip motions and the uncertainties of soft materials are intricate. This paper presents a novel technique for in-hand manipulation of soft objects with precision. The approach is based on enhancing the dexterity of robot hands via soft fingertips with tactile sensing and active shape-changing, such that pressurized air cavities act as soft tactile sensors to provide closed-loop control of fingertip position and avoid damage to the object, and pneumatically tuned positive-pressure deformations act as a localized soft gripper to perform additional translations and rotations. We model the deformation of the soft fingertips to predict the in-hand manipulation of soft objects, and experimentally demonstrate the resulting in-hand manipulation capabilities of a gripper of limited dexterity with an algorithm based on the proposed dual abilities. Results show that the introduced approach can ease and enhance the prehensile in-hand translation and rotation of soft objects for precision tasks across the hand workspace, without damage.

  • Conference paper
    Ding Z, Lepora N, Johns E, 2020,

    Sim-to-real transfer for optical tactile sensing

    , IEEE International Conference on Robotics and Automation, Publisher: IEEE, ISSN: 2152-4092

    Deep learning and reinforcement learning methods have been shown to enable learning of flexible and complex robot controllers. However, the reliance on large amounts of training data often requires data collection to be carried out in simulation, with a number of sim-to-real transfer methods being developed in recent years. In this paper, we study these techniques for tactile sensing using the TacTip optical tactile sensor, which consists of a deformable tip with a camera observing the positions of pins inside this tip. We designed a model for soft body simulation which was implemented using the Unity physics engine, and trained a neural network to predict the locations and angles of edges when in contact with the sensor. Using domain randomisation techniques for sim-to-real transfer, we show how this framework can be used to accurately predict edges with less than 1 mm prediction error in real-world testing, without any real-world data at all.
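
    The domain randomisation step mentioned above amounts to re-sampling simulator parameters for every rendered contact, so the network cannot overfit to a single simulated appearance; a sketch follows, with parameter names and ranges that are illustrative assumptions rather than the paper's values.

        # Sketch of domain randomisation for sim-to-real tactile training.
        # Parameter names and ranges are illustrative assumptions.
        import random

        def sample_sim_params():
            return {
                "pin_spacing_mm": random.uniform(2.8, 3.2),
                "tip_stiffness":  random.uniform(0.8, 1.2),  # relative scale
                "camera_noise":   random.uniform(0.0, 0.02),
                "lighting_gain":  random.uniform(0.7, 1.3),
            }

        # Each synthetic training sample is generated under a fresh draw:
        batch_params = [sample_sim_params() for _ in range(32)]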

  • Conference paper
    Clark A, Rojas N, 2020,

    Design and workspace characterisation of malleable robots

    , IEEE International Conference on Robotics and Automation, Publisher: IEEE

    For the majority of tasks performed by traditional serial robot arms, such as bin picking or pick and place, only two or three degrees of freedom (DOF) are required for motion; however, by augmenting the number of degrees of freedom, further dexterity of robot arms for multiple tasks can be achieved. Instead of increasing the number of joints of a robot to improve flexibility and adaptation, which increases control complexity, weight, and cost of the overall system, malleable robots utilise a variable stiffness link between joints allowing the relative positioning of the revolute pairs at each end of the link to vary, thus enabling a low-DOF serial robot to adapt across tasks by varying its workspace. In this paper, we present the design and prototyping of a 2-DOF malleable robot, calculate the general equation of its workspace using a parameterisation based on distance geometry, suitable for robot arms of variable topology, and characterise the workspace categories that the end effector of the robot can trace via reconfiguration. Through the design and construction of the malleable robot we explore design considerations, and demonstrate the viability of the overall concept. By using motion tracking on the physical robot, we show examples of the infinite number of workspaces that the introduced 2-DOF malleable robot can achieve.

  • Journal article
    Runciman M, Avery J, Zhao M, Darzi A, Mylonas GP et al., 2020,

    Deployable, variable stiffness, cable driven robot for minimally invasive surgery

    , Frontiers in Robotics and AI, Vol: 6, Pages: 1-16, ISSN: 2296-9144

    Minimally Invasive Surgery (MIS) imposes a trade-off between non-invasive access and surgical capability. Treatment of early gastric cancers over 20 mm in diameter can be achieved by performing Endoscopic Submucosal Dissection (ESD) with a flexible endoscope; however, this procedure is technically challenging, suffers from extended operation times and requires extensive training. To facilitate the ESD procedure, we have created a deployable cable-driven robot that increases the surgical capabilities of the flexible endoscope while attempting to minimize the impact on the access that it offers. Using a low-profile inflatable support structure in the shape of a hollow hexagonal prism, our robot can fold around the flexible endoscope and, when the target site has been reached, achieve a 73.16% increase in volume and increase its radial stiffness. A sheath around the variable stiffness structure delivers a series of force transmission cables that connect to two independent tubular end-effectors through which standard flexible endoscopic instruments can pass and be anchored. Using a simple control scheme based on the length of each cable, the pose of the two instruments can be controlled by haptic controllers in each hand of the user. The forces exerted by a single instrument were measured, and a maximum magnitude of 8.29 N was observed along a single axis. The working channels and tip control of the flexible endoscope remain in use in conjunction with our robot, and were used during a procedure imitating the demands of ESD that was successfully carried out by a novice user. Not only does this robot facilitate difficult surgical techniques, but it can also be easily customized and rapidly produced at low cost due to a programmatic design approach.
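
    The cable-length control scheme mentioned above can be pictured as mapping a desired instrument tip position to the straight-line distances from fixed cable attachment points; the geometry below is assumed purely for illustration, not taken from the paper.

        # Sketch of length-based control for a cable-driven mechanism:
        # desired tip position -> required length of each cable.
        # Anchor coordinates are illustrative assumptions.
        import numpy as np

        anchors = np.array([[0.00, 0.000, 0.0],   # cable exit points on the
                            [0.04, 0.000, 0.0],   # support structure (assumed)
                            [0.02, 0.035, 0.0]])

        def cable_lengths(tip_xyz):
            """Required cable lengths for a desired instrument tip position."""
            return np.linalg.norm(anchors - np.asarray(tip_xyz), axis=1)

        lengths = cable_lengths([0.02, 0.012, 0.03])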

  • Conference paper
    Johns E, Liu S, Davison A, 2020,

    End-to-end multi-task learning with attention

    , The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, Publisher: IEEE

    We propose a novel multi-task learning architecture, which allows learning of task-specific feature-level attention. Our design, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with a soft-attention module for each task. These modules allow for learning of task-specific features from the global features, whilst simultaneously allowing for features to be shared across different tasks. The architecture can be trained end-to-end and can be built upon any feed-forward neural network, is simple to implement, and is parameter efficient. We evaluate our approach on a variety of datasets, across both image-to-image predictions and image classification tasks. We show that our architecture is state-of-the-art in multi-task learning compared to existing methods, and is also less sensitive to various weighting schemes in the multi-task loss function. Code is available at https://github.com/lorenmt/mtan.
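
    A minimal sketch of the soft-attention idea, one per-task mask computed from shared features and used to gate them, is given below; this is a simplification under assumed shapes, and the authors' full implementation is at the repository linked above.

        # Sketch of an MTAN-style per-task soft-attention module (PyTorch):
        # a mask in [0, 1], computed from the shared features, gates those
        # features for one task. Shapes are illustrative assumptions.
        import torch
        import torch.nn as nn

        class TaskAttention(nn.Module):
            def __init__(self, channels):
                super().__init__()
                self.mask = nn.Sequential(
                    nn.Conv2d(channels, channels, 1), nn.ReLU(),
                    nn.Conv2d(channels, channels, 1), nn.Sigmoid(),
                )

            def forward(self, shared):             # shared: (B, C, H, W)
                return self.mask(shared) * shared  # task-specific features

        shared = torch.rand(2, 64, 32, 32)         # from the shared backbone
        task_features = TaskAttention(64)(shared)  # one module per task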

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
