Search results

  • Journal article
    Russell F, Kormushev P, Vaidyanathan R, Ellison P et al., 2020,

    The impact of ACL laxity on a bicondylar robotic knee and implications in human joint biomechanics

    , IEEE Transactions on Biomedical Engineering, Vol: 67, Pages: 2817-2827, ISSN: 0018-9294

    Objective: Elucidating the role of structural mechanisms in the knee can improve joint surgeries, rehabilitation, and understanding of biped locomotion. Identification of key features, however, is challenging due to limitations in simulation and in-vivo studies. In particular, the coupling of the patello-femoral and tibio-femoral joints with ligaments and its impact on joint mechanics and movement is not understood. We investigate this coupling experimentally through the design and testing of a robotic sagittal plane model. Methods: We constructed a sagittal plane robot comprising: 1) elastic links representing cruciate ligaments; 2) a bi-condylar joint; 3) a patella; and 4) hamstring and quadriceps actuators. Stiffness and geometry were derived from anthropometric data. Squatting tests from 10° to 110° were executed at speeds of 0.1-0.25 Hz over a range of anterior cruciate ligament (ACL) slack lengths. Results: Increasing ACL length compromised joint stability, yet did not impact quadriceps mechanical advantage or the force required to squat. The trend was consistent through varying condyle contact point and ligament force changes. Conclusion: The geometry of the condyles allows the ratio of quadriceps to patella tendon force to compensate for contact point changes imparted by the removal of the ACL. Thus, the system maintains a constant mechanical advantage. Significance: The investigation uncovers critical features of human knee biomechanics. Findings contribute to understanding of knee ligament damage, inform procedures for knee surgery and orthopaedic implant design, and support design of trans-femoral prosthetics and walking robots. Results further demonstrate the utility of robotics as a powerful means of studying human joint biomechanics.

  • Conference paper
    Wang K, Marsh DM, Saputra RP, Chappell D, Jiang Z, Kon B, Kormushev P et al., 2020,

    Design and control of SLIDER: an ultra-lightweight, knee-less, low-cost bipedal walking robot

    , International Conference on Intelligent Robots and Systems (IROS), Las Vegas, USA

    Most state-of-the-art bipedal robots are designed to be highly anthropomorphic and therefore possess legs with knees. Whilst this facilitates more human-like locomotion, there are implementation issues that make walking with straight or near-straight legs difficult. Most bipedal robots have to move with a constant bend in the legs to avoid singularities at the knee joints, and to keep the centre of mass at a constant height for control purposes. Furthermore, having a knee on the leg increases the design complexity as well as the weight of the leg, hindering the robot's performance in agile behaviours such as running and jumping. We present SLIDER, an ultra-lightweight, low-cost bipedal walking robot with a novel knee-less leg design. This non-anthropomorphic straight-legged design reduces the weight of the legs significantly whilst keeping the same functionality as anthropomorphic legs. Simulation results show that SLIDER's low-inertia legs contribute to less vertical motion of the centre of mass (CoM) than anthropomorphic robots during walking, indicating that SLIDER's model is closer to the widely used Inverted Pendulum (IP) model. Finally, stable walking on flat terrain is demonstrated both in simulation and in the physical world, and feedback control is implemented to address challenges with the physical robot.
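    For context only (this is not from the paper), the Inverted Pendulum model mentioned above treats the walker as a point-mass centre of mass kept at a constant height over the stance foot; a minimal simulation sketch of its standard linear form, with all numerical values chosen purely for illustration, is:

```python
# Minimal sketch (illustrative only) of the Linear Inverted Pendulum (LIP) model that
# SLIDER is compared against: the CoM stays at a constant height z_c and its horizontal
# dynamics are x_ddot = (g / z_c) * (x - p), where p is the stance-foot (pivot) position.
# All values below are arbitrary example numbers, not parameters of the robot.
import numpy as np

g = 9.81      # gravity [m/s^2]
z_c = 0.7     # assumed constant CoM height [m]
dt = 0.001    # integration step [s]

def simulate_lip(x0, xdot0, p, t_end=0.5):
    """Integrate the LIP horizontal CoM dynamics with a fixed pivot position p."""
    x, xdot = x0, xdot0
    for _ in range(int(t_end / dt)):
        xddot = (g / z_c) * (x - p)   # LIP equation of motion
        xdot += xddot * dt
        x += xdot * dt
    return x, xdot

# CoM starting 5 cm behind the pivot with a small forward velocity.
print(simulate_lip(x0=-0.05, xdot0=0.3, p=0.0))
```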

  • Journal article
    AlAttar A, Kormushev P, 2020,

    Kinematic-model-free orientation control for robot manipulation using locally weighted dual quaternions

    , Robotics, Vol: 9, Pages: 1-12, ISSN: 2218-6581

    Conventional control of robotic manipulators requires prior knowledge of their kinematic structure. Model-learning controllers have the advantage of being able to control robots without requiring a complete kinematic model and work well in less structured environments. Our recently proposed Encoderless controller has shown promising ability to control a manipulator without requiring any prior kinematic model whatsoever. However, this controller is limited to position control, leaving orientation control unsolved. The research presented in this paper extends the state-of-the-art kinematic-model-free controller to handle orientation control, allowing a robotic arm to be manipulated without requiring any prior model of the robot or any joint angle information during control. This paper presents a novel method to simultaneously control the position and orientation of a robot's end effector using locally weighted dual quaternions. The proposed controller is also scaled up to control three-degree-of-freedom robots.
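    As an illustrative aside (not the authors' code), the sketch below shows how a rigid-body pose, a rotation plus a translation, is packed into a unit dual quaternion, the representation the controller above operates on; the function names are hypothetical.

```python
# Illustrative sketch: encoding an end-effector pose as a unit dual quaternion
# q = q_r + eps * q_d, with dual part q_d = 0.5 * t * q_r (t as a pure quaternion).
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def pose_to_dual_quaternion(rotation_q, translation):
    """Combine a unit rotation quaternion and a translation vector into a dual quaternion."""
    t = np.array([0.0, *translation])        # translation as a pure quaternion
    real = np.asarray(rotation_q, dtype=float)
    dual = 0.5 * quat_mul(t, real)
    return real, dual

# Example: 90-degree rotation about z, translated 0.3 m along x.
q_r = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
real, dual = pose_to_dual_quaternion(q_r, [0.3, 0.0, 0.0])
print(real, dual)
```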

  • Conference paper
    Kotonya N, Toni F, 2020,

    Explainable Automated Fact-Checking for Public Health Claims

    , The 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)
  • Conference paper
    Lertvittayakumjorn P, Specia L, Toni F, 2020,

    FIND: Human-in-the-Loop Debugging Deep Text Classifiers

    , 2020 Conference on Empirical Methods in Natural Language Processing
  • Journal article
    Cursi F, Mylonas GP, Kormushev P, 2020,

    Adaptive kinematic modelling for multiobjective control of a redundant surgical robotic tool

    , Robotics, Vol: 9, Pages: 68-68, ISSN: 2218-6581

    Accurate kinematic models are essential for effective control of surgical robots. For tendon-driven robots, which are common in minimally invasive surgery, the high nonlinearities in the transmission make modelling complex. Machine learning techniques are a preferred approach to tackle this problem. However, surgical environments are rarely structured, because organs are very soft and deformable, and are unpredictable, for instance because of fluids in the system or wear and breakage of the tendons, which lead to changes in the system's behaviour. Therefore, the model needs to adapt quickly. In this work, we propose a method to learn the kinematic model of a redundant surgical robot and control it to perform surgical tasks both autonomously and in teleoperation. The approach employs feedforward artificial neural networks (ANNs) for building the kinematic model of the robot offline, and an online adaptive strategy to allow the system to conform to the changing environment. To prove the capabilities of the method, a comparison with a simple feedback controller for autonomous tracking is carried out. Simulation results show that the proposed method is capable of achieving very small tracking errors, even when unpredicted changes in the system occur, such as broken joints. The method also proved effective in guaranteeing accurate tracking in teleoperation.
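    To make the idea concrete (a rough sketch under stated assumptions, not the paper's implementation), a learned kinematic model can be a small feedforward network mapping actuator inputs to end-effector position, with a single online gradient step per new observation providing the adaptation; the network size, learning rate and toy target below are made up for illustration.

```python
# Sketch of a learned kinematic model q -> end-effector position with online adaptation.
import numpy as np

rng = np.random.default_rng(0)
n_joints, n_hidden = 4, 32
W1 = rng.normal(scale=0.5, size=(n_hidden, n_joints)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(3, n_hidden));        b2 = np.zeros(3)

def forward(q):
    h = np.tanh(W1 @ q + b1)            # hidden layer
    return W2 @ h + b2, h               # predicted end-effector position

def online_update(q, y_measured, lr=1e-2):
    """One adaptation step on the squared prediction error for a new (q, y) observation."""
    y_pred, h = forward(q)
    err = y_pred - y_measured
    # Gradients of 0.5*||err||^2 (standard backpropagation for one hidden layer).
    gW2 = np.outer(err, h);             gb2 = err
    dh = (W2.T @ err) * (1.0 - h ** 2)  # backprop through tanh
    gW1 = np.outer(dh, q);              gb1 = dh
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
    return float(np.linalg.norm(err))

# Toy usage: adapt online against a made-up "true" kinematics.
true_map = lambda q: np.array([np.sum(np.cos(q)), np.sum(np.sin(q)), q[0] * q[1]])
for _ in range(2000):
    q = rng.uniform(-1, 1, n_joints)
    online_update(q, true_map(q))
q_test = rng.uniform(-1, 1, n_joints)
print("prediction error after adaptation:", online_update(q_test, true_map(q_test)))
```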

  • Journal article
    Meyer H, Dawes T, Serrani M, Bai W, Tokarczuk P, Cai J, Simoes Monteiro de Marvao A, Henry A, Lumbers T, Gierten J, Thumberger T, Wittbrodt J, Ware J, Rueckert D, Matthews P, Prasad S, Costantino M, Cook S, Birney E, O'Regan D et al., 2020,

    Genetic and functional insights into the fractal structure of the heart

    , Nature, Vol: 584, Pages: 589-594, ISSN: 0028-0836

    The inner surfaces of the human heart are covered by a complex network of muscular strands that is thought to be a vestige of embryonic development [1,2]. The function of these trabeculae in adults and their genetic architecture are unknown. To investigate this, we performed a genome-wide association study using fractal analysis of trabecular morphology as an image-derived phenotype in 18,096 UK Biobank participants. We identified 16 significant loci containing genes associated with haemodynamic phenotypes and regulation of cytoskeletal arborisation [3,4]. Using biomechanical simulations and human observational data, we demonstrate that trabecular morphology is an important determinant of cardiac performance. Through genetic association studies with cardiac disease phenotypes and Mendelian randomisation, we find a causal relationship between trabecular morphology and cardiovascular disease risk. These findings suggest an unexpected role for myocardial trabeculae in the function of the adult heart, identify conserved pathways that regulate structural complexity, and reveal their influence on susceptibility to disease.
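    As background (not the study's pipeline), fractal analysis of an image-derived phenotype usually means estimating a fractal dimension of a binary outline, for example by box counting; a minimal sketch, with all parameters chosen for illustration, follows.

```python
# Minimal box-counting sketch: estimate the fractal dimension of a binary 2D outline by
# counting occupied boxes at several scales and fitting the slope of log(count) vs log(1/size).
import numpy as np

def box_count(binary_image, box_size):
    """Number of box_size x box_size boxes containing at least one foreground pixel."""
    h, w = binary_image.shape
    count = 0
    for i in range(0, h, box_size):
        for j in range(0, w, box_size):
            if binary_image[i:i + box_size, j:j + box_size].any():
                count += 1
    return count

def fractal_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32)):
    counts = [box_count(binary_image, s) for s in box_sizes]
    # Slope of log N(s) against log(1/s) gives the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Toy example: the boundary of a filled disk has dimension close to 1.
yy, xx = np.mgrid[:256, :256]
disk = (xx - 128) ** 2 + (yy - 128) ** 2 < 80 ** 2
boundary = disk ^ np.roll(disk, 1, axis=0)   # crude 1-pixel edge
print(fractal_dimension(boundary))
```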

  • Journal article
    Bai W, Suzuki H, Huang J, Francis C, Wang S, Tarroni G, Guitton F, Aung N, Fung K, Petersen SE, Piechnik SK, Neubauer S, Evangelou E, Dehghan A, O'Regan DP, Wilkins MR, Guo Y, Matthews PM, Rueckert D et al., 2020,

    A population-based phenome-wide association study of cardiac and aortic structure and function

    , Nature Medicine, ISSN: 1078-8956

    Differences in cardiac and aortic structure and function are associated with cardiovascular diseases and a wide range of other types of disease. Here we analyzed cardiovascular magnetic resonance images from a population-based study, the UK Biobank, using an automated machine-learning-based analysis pipeline. We report a comprehensive range of structural and functional phenotypes for the heart and aorta across 26,893 participants, and explore how these phenotypes vary according to sex, age and major cardiovascular risk factors. We extended this analysis with a phenome-wide association study, in which we tested for correlations of a wide range of non-imaging phenotypes of the participants with imaging phenotypes. We further explored the associations of imaging phenotypes with early-life factors, mental health and cognitive function using both observational analysis and Mendelian randomization. Our study illustrates how population-based cardiac and aortic imaging phenotypes can be used to better define cardiovascular disease risks as well as heart–brain health interactions, highlighting new opportunities for studying disease mechanisms and developing image-based biomarkers.

  • Journal article
    Falck F, Doshi S, Tormento M, Nersisyan G, Smuts N, Lingi J, Rants K, Saputra RP, Wang K, Kormushev P et al., 2020,

    Robot DE NIRO: a human-centered, autonomous, mobile research platform for cognitively-enhanced manipulation

    , Frontiers in Robotics and AI, Vol: A17, ISSN: 2296-9144

    We introduce Robot DE NIRO, an autonomous, collaborative, humanoid robot for mobile manipulation. We built DE NIRO to perform a wide variety of manipulation behaviors, with a focus on pick-and-place tasks. DE NIRO is designed to be used in a domestic environment, especially in support of caregivers working with the elderly. Given this design focus, DE NIRO can interact naturally, reliably, and safely with humans, autonomously navigate through environments on command, intelligently retrieve or move target objects, and avoid collisions efficiently. We describe DE NIRO's hardware and software, including an extensive vision sensor suite of 2D and 3D LIDARs, a depth camera, and a 360-degree camera rig; two types of custom grippers; and a custom-built exoskeleton called DE VITO. We demonstrate DE NIRO's manipulation capabilities in three illustrative challenges: First, we have DE NIRO perform a fetch-an-object challenge. Next, we add more cognition to DE NIRO's object recognition and grasping abilities, confronting it with small objects of unknown shape. Finally, we extend DE NIRO's capabilities into dual-arm manipulation of larger objects. We put particular emphasis on the features that enable DE NIRO to interact safely and naturally with humans. Our contribution is in sharing how a humanoid robot with complex capabilities can be designed and built quickly with off-the-shelf hardware and open-source software. Supplementary material, including our code, documentation, videos and the CAD models of several hardware parts, is openly available at https://www.imperial.ac.uk/robot-intelligence/software/

  • Conference paper
    Flageat M, Cully A, 2020,

    Fast and stable MAP-Elites in noisy domains using deep grids

    , 2020 Conference on Artificial Life, Publisher: Massachusetts Institute of Technology, Pages: 273-282

    Quality-Diversity optimisation algorithms enable the evolution of collections of both high-performing and diverse solutions. These collections offer the possibility to quickly adapt and switch from one solution to another in case it is not working as expected. Quality-Diversity optimisation therefore finds many applications in real-world domains such as robotic control. However, QD algorithms, like most optimisation algorithms, are very sensitive to uncertainty on the fitness function, but also on the behavioural descriptors. Yet, such uncertainties are frequent in real-world applications. Few works have explored this issue in the specific case of QD algorithms, and, inspired by the literature in Evolutionary Computation, they mainly focus on using sampling to approximate the "true" value of the performance of a solution. However, sampling approaches require a high number of evaluations, which in many applications such as robotics can quickly become impractical. In this work, we propose Deep-Grid MAP-Elites, a variant of the MAP-Elites algorithm that uses an archive of similar previously encountered solutions to approximate the performance of a solution. We compare our approach to previously explored ones on three noisy tasks: a standard optimisation task, the control of a redundant arm and a simulated hexapod robot. The experimental results show that this simple approach is significantly more resilient to noise on the behavioural descriptors, while achieving competitive performance in terms of fitness optimisation, and being more sample-efficient than other existing approaches.
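    To illustrate only the core data-structure change relative to standard MAP-Elites (the selection and replacement rules below are simplified assumptions, not the paper's exact procedure), each behaviour-space cell can keep a bounded pool of solutions instead of a single elite:

```python
# Rough sketch of a "deep" MAP-Elites grid: each behavioural-descriptor cell stores a
# bounded pool of previously encountered solutions rather than a single elite, which makes
# the archive more robust to noisy fitness and descriptor evaluations.
import random
from collections import defaultdict, deque

GRID_SHAPE = (10, 10)   # discretisation of a 2D behaviour space in [0, 1]^2
CELL_DEPTH = 5          # how many solutions each cell keeps

archive = defaultdict(lambda: deque(maxlen=CELL_DEPTH))

def cell_index(descriptor):
    """Map a behaviour descriptor in [0, 1]^2 to a grid cell."""
    return tuple(min(int(d * n), n - 1) for d, n in zip(descriptor, GRID_SHAPE))

def add_to_archive(solution, fitness, descriptor):
    archive[cell_index(descriptor)].append((solution, fitness))

def select_parent():
    """Pick a random occupied cell, then a random solution stored in it."""
    cell = random.choice(list(archive.keys()))
    return random.choice(archive[cell])[0]

def evaluate(solution):
    """Toy noisy evaluation standing in for a robot rollout (made up for illustration)."""
    fitness = -sum(x * x for x in solution) + random.gauss(0, 0.1)
    descriptor = (abs(solution[0]) % 1.0, abs(solution[1]) % 1.0)
    return fitness, descriptor

sol = [0.5, 0.5]
fit, desc = evaluate(sol)
add_to_archive(sol, fit, desc)
for _ in range(200):
    parent = select_parent()
    child = [x + random.gauss(0, 0.2) for x in parent]
    fit, desc = evaluate(child)
    add_to_archive(child, fit, desc)
print(len(archive), "occupied cells")
```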

  • Conference paper
    Carvalho EDC, Clark R, Nicastro A, Kelly PHJ et al., 2020,

    Scalable uncertainty for computer vision with functional variational inference

    , CVPR 2020, Publisher: IEEE, Pages: 12003-12013

    As Deep Learning continues to yield successful applications in Computer Vision, the ability to quantify all forms of uncertainty is a paramount requirement for its safe and reliable deployment in the real world. In this work, we leverage the formulation of variational inference in function space, where we associate Gaussian Processes (GPs) to both Bayesian CNN priors and variational family. Since GPs are fully determined by their mean and covariance functions, we are able to obtain predictive uncertainty estimates at the cost of a single forward pass through any chosen CNN architecture and for any supervised learning task. By leveraging the structure of the induced covariance matrices, we propose numerically efficient algorithms which enable fast training in the context of high-dimensional tasks such as depth estimation and semantic segmentation. Additionally, we provide sufficient conditions for constructing regression loss functions whose probabilistic counterparts are compatible with aleatoric uncertainty quantification.

  • Conference paper
    Rago A, Cocarascu O, Bechlivanidis C, Toni F et al., 2020,

    Argumentation as a framework for interactive explanations for recommendations

    , 17th International Conference on Principles of Knowledge Representation and Reasoning, Publisher: IJCAI

    As AI systems become ever more intertwined in our personal lives, the way in which they explain themselves to and interact with humans is an increasingly critical research area. The explanation of recommendations is thus a pivotal functionality in a user's experience of a recommender system (RS), providing the possibility of enhancing many of its desirable features in addition to its effectiveness (accuracy with respect to users' preferences). For an RS that we prove empirically is effective, we show how argumentative abstractions underpinning recommendations can provide the structural scaffolding for (different types of) interactive explanations (IEs), i.e. explanations empowering interactions with users. We prove formally that these IEs empower feedback mechanisms that guarantee that recommendations will improve with time, hence rendering the RS scrutable. Finally, we prove experimentally that the various forms of IE (tabular, textual and conversational) induce trust in the recommendations and provide a high degree of transparency in the RS's functionality.

  • Journal article
    Biffi C, Cerrolaza Martinez JJ, Tarroni G, Bai W, Simoes Monteiro de Marvao A, Oktay O, Ledig C, Le Folgoc L, Kamnitsas K, Doumou G, Duan J, Prasad S, Cook S, O'Regan D, Rueckert D et al., 2020,

    Explainable anatomical shape analysis through deep hierarchical generative models

    , IEEE Transactions on Medical Imaging, Vol: 39, Pages: 2088-2099, ISSN: 0278-0062

    Quantification of anatomical shape changes currently relies on scalar global indexes which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of many conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep learning model for shape analysis. In particular, we exploit deep generative networks to model a population of anatomical segmentations through a hierarchy of conditional latent variables. At the highest level of this hierarchy, a two-dimensional latent space is simultaneously optimised to discriminate distinct clinical conditions, enabling the direct visualisation of the classification space. Moreover, the anatomical variability encoded by this discriminative latent space can be visualised in the segmentation space thanks to the generative properties of the model, making the classification task transparent. This approach yielded high accuracy in the categorisation of healthy and remodelled left ventricles when tested on unseen segmentations from our own multi-centre dataset as well as in an external validation set, and on hippocampi from healthy controls and patients with Alzheimer's disease when tested on ADNI data. More importantly, it enabled the visualisation in three dimensions of both global and regional anatomical features which better discriminate between the conditions under exam. The proposed approach scales effectively to large populations, facilitating high-throughput analysis of normal anatomy and pathology in large-scale studies of volumetric imaging.

  • Conference paper
    Albini E, Rago A, Baroni P, Toni F et al., 2020,

    Relation-Based Counterfactual Explanations for Bayesian Network Classifiers

    , The 29th International Joint Conference on Artificial Intelligence (IJCAI 2020)
  • Conference paper
    Pardo F, Levdik V, Kormushev P, 2020,

    Scaling all-goals updates in reinforcement learning using convolutional neural networks

    , 34th AAAI Conference on Artificial Intelligence (AAAI 2020), Publisher: Association for the Advancement of Artificial Intelligence, Pages: 5355-5362, ISSN: 2374-3468

    Being able to reach any desired location in the environment can be a valuable asset for an agent. Learning a policy to navigate between all pairs of states individually is often not feasible. An all-goals updating algorithm uses each transition to learn Q-values towards all goals simultaneously and off-policy. However, the expensive numerous updates in parallel have limited the approach to small tabular cases so far. To tackle this problem we propose to use convolutional network architectures to generate Q-values and updates for a large number of goals at once. We demonstrate the accuracy and generalization qualities of the proposed method on randomly generated mazes and Sokoban puzzles. In the case of on-screen goal coordinates, the resulting mapping from frames to distance-maps directly informs the agent about which places are reachable and in how many steps. As an example of application, we show that replacing the random actions in ε-greedy exploration by several actions towards feasible goals generates better exploratory trajectories on the Montezuma's Revenge and Super Mario All-Stars games.
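    To illustrate the underlying update only (the paper's contribution is the convolutional, function-approximation version; the tabular form and toy grid world below are simplifications), an all-goals update applies the Q-learning rule towards every goal at once for each observed transition:

```python
# Simplified tabular illustration of an "all-goals" update: for one observed transition
# (s, a, s'), the Q-values towards *every* goal g are updated off-policy at the same time.
# The grid world, reward scheme and hyperparameters are assumptions for illustration only.
import numpy as np

n_states, n_actions = 25, 4                      # toy 5x5 grid world, 4 moves
gamma, alpha = 0.95, 0.1
Q = np.zeros((n_states, n_actions, n_states))    # Q[s, a, g]

def all_goals_update(s, a, s_next):
    goals = np.arange(n_states)
    reward = (s_next == goals).astype(float)     # +1 only for the goal just reached
    done = (s_next == goals)
    target = reward + gamma * np.max(Q[s_next], axis=0) * (~done)
    Q[s, a, :] += alpha * (target - Q[s, a, :])  # one update per goal, vectorised

# A single transition updates the value of action `a` in state `s` towards all 25 goals.
all_goals_update(s=12, a=2, s_next=13)
print(Q[12, 2].round(3))
```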

  • Book
    Deisenroth MP, Faisal AA, Ong CS, 2020,

    Mathematics for Machine Learning

    , Publisher: Cambridge University Press, ISBN: 9781108455145
  • Conference paper
    Saputra RP, Rakicevic N, Kormushev P, 2020,

    Sim-to-real learning for casualty detection from ground projected point cloud data

    , 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), Publisher: IEEE

    This paper addresses the problem of human body detection, particularly of a human body lying on the ground (a.k.a. a casualty), using point cloud data. The ability to detect a casualty is one of the most important features of mobile rescue robots, enabling them to operate autonomously. We propose a deep-learning-based casualty detection method using a deep convolutional neural network (CNN). This network is trained to detect a casualty from a point-cloud data input. In the method we propose, the point cloud input is pre-processed to generate a depth-image-like ground-projected heightmap. This heightmap is generated from the projected distance of each point onto the ground plane detected within the point cloud data. The generated heightmap, in image form, is then used as an input for the CNN to detect a human body lying on the ground. To train the neural network, we propose a novel sim-to-real approach, in which the network model is trained using synthetic data obtained in simulation and then tested on real sensor data. To make the model transferable to real data, during training we adopt specific data augmentation strategies with the synthetic training data. The experimental results show that the data augmentation introduced during the training process is essential for improving the performance of the trained model on real data. More specifically, the results demonstrate that data augmentation on the raw point-cloud data contributed to a considerable improvement in the trained model's performance.
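    A rough sketch of the ground-projected heightmap idea described above (not the paper's implementation; the ground plane is assumed to be z = 0 here, whereas in practice it would come from a plane-detection step such as RANSAC):

```python
# Illustrative sketch: turning a point cloud into a ground-projected heightmap image by
# binning points over the x-y plane and keeping the maximum height above the ground per cell.
import numpy as np

def pointcloud_to_heightmap(points, cell_size=0.05, extent=5.0):
    """points: (N, 3) array of x, y, z in the ground frame (z up). Returns a 2D height image."""
    n_cells = int(2 * extent / cell_size)
    heightmap = np.zeros((n_cells, n_cells), dtype=np.float32)

    # Keep only points inside the mapped area and above the ground plane.
    mask = (np.abs(points[:, 0]) < extent) & (np.abs(points[:, 1]) < extent) & (points[:, 2] > 0)
    pts = points[mask]

    ix = ((pts[:, 0] + extent) / cell_size).astype(int)
    iy = ((pts[:, 1] + extent) / cell_size).astype(int)
    np.maximum.at(heightmap, (iy, ix), pts[:, 2])   # max height per cell
    return heightmap

# Toy usage with random points; a real input would come from a depth sensor or LIDAR.
cloud = np.random.uniform(-4, 4, size=(10_000, 3)) * np.array([1, 1, 0.25])
hm = pointcloud_to_heightmap(cloud)
print(hm.shape, hm.max())
```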

  • Journal article
    Stimberg M, Goodman D, Nowotny T, 2020,

    Brian2GeNN: accelerating spiking neural network simulations with graphics hardware

    , Scientific Reports, Vol: 10, Pages: 1-12, ISSN: 2045-2322

    “Brian” is a popular Python-based simulator for spiking neural networks, commonly used in computational neuroscience. GeNN is a C++-based meta-compiler for accelerating spiking neural network simulations using consumer or high-performance-grade graphics processing units (GPUs). Here we introduce a new software package, Brian2GeNN, that connects the two systems so that users can make use of GeNN GPU acceleration when developing their models in Brian, without requiring any technical knowledge about GPUs, C++ or GeNN. The new Brian2GeNN software uses a pipeline of code generation to translate Brian scripts into C++ code that can be used as input to GeNN, and subsequently can be run on suitable NVIDIA GPU accelerators. From the user's perspective, the entire pipeline is invoked by adding two simple lines to their Brian scripts. We have shown that using Brian2GeNN, two non-trivial models from the literature can run tens to hundreds of times faster than on a CPU.
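    For reference, the "two simple lines" mentioned above are, as documented for Brian2GeNN, an import and a device selection at the top of an otherwise unchanged Brian 2 script; the tiny model below is an arbitrary example, not from the paper.

```python
# Minimal illustration of switching a Brian 2 script to GPU execution via Brian2GeNN.
# The two marked lines are the ones the abstract refers to; running this requires GeNN
# and a suitable NVIDIA GPU to be installed.
from brian2 import NeuronGroup, set_device, run, ms, mV

import brian2genn                 # line 1: make the GeNN device available
set_device('genn')                # line 2: route code generation through GeNN

# Ordinary Brian 2 model code stays unchanged (arbitrary toy model).
group = NeuronGroup(1000, 'dv/dt = (10*mV - v) / (20*ms) : volt',
                    threshold='v > 5*mV', reset='v = 0*mV', method='exact')
run(100 * ms)
```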

  • Journal article
    Baroni P, Toni F, Verheij B, 2020,

    On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games: 25 years later (foreword)

    , Argument & Computation, Vol: 11, Pages: 1-14, ISSN: 1946-2166
  • Journal article
    Zambelli M, Cully A, Demiris Y, 2020,

    Multimodal representation models for prediction and control from partial information

    , Robotics and Autonomous Systems, Vol: 123, ISSN: 0921-8890

    Similar to humans, robots benefit from interacting with their environment through a number of different sensor modalities, such as vision, touch and sound. However, learning from different sensor modalities is difficult, because the learning model must be able to handle diverse types of signals, and learn a coherent representation even when parts of the sensor inputs are missing. In this paper, a multimodal variational autoencoder is proposed to enable an iCub humanoid robot to learn representations of its sensorimotor capabilities from different sensor modalities. The proposed model is able to (1) reconstruct missing sensory modalities, (2) predict its own sensorimotor state and the visual trajectories of other agents' actions, and (3) control the agent to imitate an observed visual trajectory. Also, the proposed multimodal variational autoencoder can capture the kinematic redundancy of the robot motion through the learned probability distribution. Training multimodal models is not trivial due to the combinatorial complexity given by the possibility of missing modalities. We propose a strategy to train multimodal models, which successfully achieves improved performance of different reconstruction models. Finally, extensive experiments have been carried out using an iCub humanoid robot, showing high performance in multiple reconstruction, prediction and imitation tasks.

