
Search results

  • Journal article
Meyer H, Dawes T, Serrani M, Bai W, Tokarczuk P, Cai J, Simoes Monteiro de Marvao A, Henry A, Lumbers T, Gierten J, Thumberger T, Wittbrodt J, Ware J, Rueckert D, Matthews P, Prasad S, Costantino M, Cook S, Birney E, O'Regan D et al.,

    Genetic and functional insights into the fractal structure of the heart

    , Nature, ISSN: 0028-0836
  • Conference paper
Albini E, Rago A, Baroni P, Toni F et al.,

    Relation-Based Counterfactual Explanations for Bayesian Network Classifiers

    , The 29th International Joint Conference on Artificial Intelligence (IJCAI 2020)
  • Conference paper
    Pardo F, Levdik V, Kormushev P, 2020,

    Scaling All-Goals Updates in Reinforcement Learning Using Convolutional Neural Networks

    , 34th AAAI Conference on Artificial Intelligence (AAAI 2020)
  • Book
    Deisenroth MP, Faisal AA, Ong CS, 2020,

    Mathematics for Machine Learning

    , Publisher: Cambridge University Press, ISBN: 9781108455145
  • Journal article
    Stimberg M, Goodman D, Nowotny T, 2020,

    Brian2GeNN: accelerating spiking neural network simulations with graphics hardware

    , Scientific Reports, Vol: 10, Pages: 1-12, ISSN: 2045-2322

“Brian” is a popular Python-based simulator for spiking neural networks, commonly used in computational neuroscience. GeNN is a C++-based meta-compiler for accelerating spiking neural network simulations using consumer or high performance grade graphics processing units (GPUs). Here we introduce a new software package, Brian2GeNN, that connects the two systems so that users can make use of GeNN GPU acceleration when developing their models in Brian, without requiring any technical knowledge about GPUs, C++ or GeNN. The new Brian2GeNN software uses a pipeline of code generation to translate Brian scripts into C++ code that can be used as input to GeNN, and subsequently can be run on suitable NVIDIA GPU accelerators. From the user’s perspective, the entire pipeline is invoked by adding two simple lines to their Brian scripts. We have shown that using Brian2GeNN, two non-trivial models from the literature can run tens to hundreds of times faster than on CPU.
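The abstract says the whole pipeline is invoked by adding two lines to an ordinary Brian script. As a hedged sketch of what that looks like in practice (assuming the `brian2` and `brian2genn` packages and a suitable NVIDIA GPU are installed; the `try`/`except` guard is only there so the fragment imports cleanly where those packages are absent):

```python
# The two added lines, (1) and (2), are the only changes to a plain Brian script.
try:
    from brian2 import *   # the ordinary Brian 2 simulator
    import brian2genn      # (1) make the GeNN backend available

    set_device('genn')     # (2) route Brian's code generation through GeNN

    # ...the rest is an unchanged Brian model, e.g.:
    # G = NeuronGroup(100, 'dv/dt = -v/(10*ms) : 1',
    #                 threshold='v > 1', reset='v = 0')
    # run(1*second)
except ImportError:
    pass  # brian2 / brian2genn not installed in this environment
```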

  • Journal article
Biffi C, Cerrolaza Martinez JJ, Tarroni G, Bai W, Simoes Monteiro de Marvao A, Oktay O, Ledig C, Le Folgoc L, Kamnitsas K, Doumou G, Duan J, Prasad S, Cook S, O'Regan D, Rueckert D et al., 2020,

    Explainable anatomical shape analysis through deep hierarchical generative models

    , IEEE Transactions on Medical Imaging, ISSN: 0278-0062

Quantification of anatomical shape changes currently relies on scalar global indexes which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of many conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep learning model for shape analysis. In particular, we exploit deep generative networks to model a population of anatomical segmentations through a hierarchy of conditional latent variables. At the highest level of this hierarchy, a two-dimensional latent space is simultaneously optimised to discriminate distinct clinical conditions, enabling the direct visualisation of the classification space. Moreover, the anatomical variability encoded by this discriminative latent space can be visualised in the segmentation space thanks to the generative properties of the model, making the classification task transparent. This approach yielded high accuracy in the categorisation of healthy and remodelled left ventricles when tested on unseen segmentations from our own multi-centre dataset as well as in an external validation set, and on hippocampi from healthy controls and patients with Alzheimer’s disease when tested on ADNI data. More importantly, it enabled the visualisation in three-dimensions of both global and regional anatomical features which better discriminate between the conditions under exam. The proposed approach scales effectively to large populations, facilitating high-throughput analysis of normal anatomy and pathology in large-scale studies of volumetric imaging.

  • Journal article
    Zambelli M, Cully A, Demiris Y, 2020,

    Multimodal representation models for prediction and control from partial information

    , Robotics and Autonomous Systems, Vol: 123, ISSN: 0921-8890

    Similar to humans, robots benefit from interacting with their environment through a number of different sensor modalities, such as vision, touch, sound. However, learning from different sensor modalities is difficult, because the learning model must be able to handle diverse types of signals, and learn a coherent representation even when parts of the sensor inputs are missing. In this paper, a multimodal variational autoencoder is proposed to enable an iCub humanoid robot to learn representations of its sensorimotor capabilities from different sensor modalities. The proposed model is able to (1) reconstruct missing sensory modalities, (2) predict the sensorimotor state of self and the visual trajectories of other agents actions, and (3) control the agent to imitate an observed visual trajectory. Also, the proposed multimodal variational autoencoder can capture the kinematic redundancy of the robot motion through the learned probability distribution. Training multimodal models is not trivial due to the combinatorial complexity given by the possibility of missing modalities. We propose a strategy to train multimodal models, which successfully achieves improved performance of different reconstruction models. Finally, extensive experiments have been carried out using an iCub humanoid robot, showing high performance in multiple reconstruction, prediction and imitation tasks.

  • Conference paper
    Jha R, Belardinelli F, Toni F, 2020,

    Formal verification of debates in argumentation theory.

    , Publisher: ACM, Pages: 940-947
  • Journal article
    Rakicevic N, Kormushev P, 2019,

    Active learning via informed search in movement parameter space for efficient robot task learning and transfer

    , Autonomous Robots, Vol: 43, Pages: 1917-1935, ISSN: 0929-5593

Learning complex physical tasks via trial-and-error is still challenging for high-degree-of-freedom robots. The greatest challenges are devising a suitable objective function that defines the task, and the high sample complexity of learning the task. We propose a novel active learning framework, consisting of decoupled task model and exploration components, which does not require an objective function. The task model is specific to a task and maps the parameter space, defining a trial, to the trial outcome space. The exploration component enables efficient search in the trial-parameter space to generate the subsequent most informative trials, by simultaneously exploiting all the information gained from previous trials and reducing the task model’s overall uncertainty. We analyse the performance of our framework in a simulation environment and further validate it on a challenging bimanual-robot puck-passing task. Results show that the robot successfully acquires the necessary skills after only 100 trials without any prior information about the task or target positions. Decoupling the framework’s components also enables efficient skill transfer to new environments which is validated experimentally.

  • Conference paper
    Saputra RP, Rakicevic N, Kormushev P, 2019,

    Sim-to-real learning for casualty detection from ground projected point cloud data

    , 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), Publisher: IEEE

This paper addresses the problem of human body detection—particularly a human body lying on the ground (a.k.a. casualty)—using point cloud data. This ability to detect a casualty is one of the most important features of mobile rescue robots, in order for them to be able to operate autonomously. We propose a deep-learning-based casualty detection method using a deep convolutional neural network (CNN). This network is trained to be able to detect a casualty using a point-cloud data input. In the method we propose, the point cloud input is pre-processed to generate a depth image-like ground-projected heightmap. This heightmap is generated based on the projected distance of each point onto the detected ground plane within the point cloud data. The generated heightmap—in image form—is then used as an input for the CNN to detect a human body lying on the ground. To train the neural network, we propose a novel sim-to-real approach, in which the network model is trained using synthetic data obtained in simulation and then tested on real sensor data. To make the model transferable to real data implementations, during the training we adopt specific data augmentation strategies with the synthetic training data. The experimental results show that data augmentation introduced during the training process is essential for improving the performance of the trained model on real data. More specifically, the results demonstrate that the data augmentations on raw point-cloud data have contributed to a considerable improvement of the trained model performance.

  • Journal article
    Zheng JX, Pawar S, Goodman DFM, 2019,

Further towards unambiguous edge bundling: Investigating power-confluent drawings for network visualization

    , IEEE Transactions on Visualization and Computer Graphics, ISSN: 1077-2626

Bach et al. [1] recently presented an algorithm for constructing confluent drawings, by leveraging power graph decomposition to generate an auxiliary routing graph. We identify two problems with their method and offer a single solution to solve both. We also classify the exact type of confluent drawings that the algorithm can produce as 'power-confluent', and prove that it is a subclass of the previously studied 'strict confluent' drawing. A description and source code of our implementation is also provided, which additionally includes an improved method for power graph construction.

  • Journal article
Peach R, Yaliraki S, Lefevre D, Barahona M et al., 2019,

    Data-driven unsupervised clustering of online learner behaviour 

    , npj Science of Learning, Vol: 4, ISSN: 2056-7936

    The widespread adoption of online courses opens opportunities for analysing learner behaviour and optimising web-based learning adapted to observed usage. Here we introduce a mathematical framework for the analysis of time series of online learner engagement, which allows the identification of clusters of learners with similar online temporal behaviour directly from the raw data without prescribing a priori subjective reference behaviours. The method uses a dynamic time warping kernel to create a pairwise similarity between time series of learner actions, and combines it with an unsupervised multiscale graph clustering algorithm to identify groups of learners with similar temporal behaviour. To showcase our approach, we analyse task completion data from a cohort of learners taking an online post-graduate degree at Imperial Business School. Our analysis reveals clusters of learners with statistically distinct patterns of engagement, from distributed to massed learning, with different levels of regularity, adherence to pre-planned course structure and task completion. The approach also reveals outlier learners with highly sporadic behaviour. A posteriori comparison against student performance shows that, whereas high performing learners are spread across clusters with diverse temporal engagement, low performers are located significantly in the massed learning cluster, and our unsupervised clustering identifies low performers more accurately than common machine learning classification methods trained on temporal statistics of the data. Finally, we test the applicability of the method by analysing two additional datasets: a different cohort of the same course, and time series of different format from another university.
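The pairwise-similarity step described above can be illustrated with a minimal dynamic time warping (DTW) distance and the kernel it induces. This is a generic sketch, not the authors' implementation; the absolute-difference local cost and the `sigma` bandwidth are illustrative choices:

```python
import math

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance
    between two numeric sequences, with absolute-difference cost."""
    n, m = len(a), len(b)
    INF = float('inf')
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of: step in a, step in b, or step in both
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def dtw_kernel(a, b, sigma=1.0):
    """Turn the DTW distance into a pairwise similarity in (0, 1]."""
    return math.exp(-dtw(a, b) / sigma)
```

Two series that differ only by a local stretch, e.g. `[0, 0, 1]` vs `[0, 1]`, get DTW distance 0 and kernel value 1, which is exactly the invariance to temporal warping that the clustering of learner time series relies on.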

  • Journal article
    Zheng JX, Pawar S, Goodman DFM, 2019,

    Graph drawing by stochastic gradient descent

    , IEEE Transactions on Visualization and Computer Graphics, Vol: 25, Pages: 2738-2748, ISSN: 1077-2626

A popular method of force-directed graph drawing is multidimensional scaling using graph-theoretic distances as input. We present an algorithm to minimize its energy function, known as stress, by using stochastic gradient descent (SGD) to move a single pair of vertices at a time. Our results show that SGD can reach lower stress levels faster and more consistently than majorization, without needing help from a good initialization. We then show how the unique properties of SGD make it easier to produce constrained layouts than previous approaches. We also show how SGD can be directly applied within the sparse stress approximation of Ortmann et al. [1], making the algorithm scalable up to large graphs.
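The pair-at-a-time stress update can be sketched in a few lines. This is a simplified reconstruction, not the paper's exact code: the weights w_ij = 1/d_ij², the capped step size, and the exponentially decaying schedule follow common stress-layout conventions and are assumptions here:

```python
import math
import random

def bfs_distances(n, edges):
    """All-pairs graph-theoretic distances of an unweighted graph."""
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    d = [[0] * n for _ in range(n)]
    for s in range(n):
        seen, queue = {s}, [s]
        while queue:
            u = queue.pop(0)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    d[s][v] = d[s][u] + 1
                    queue.append(v)
    return d

def stress(pos, d):
    """Normalized stress: sum over pairs of (|x_i - x_j| - d_ij)^2 / d_ij^2."""
    s, n = 0.0, len(d)
    for i in range(n):
        for j in range(i + 1, n):
            dist = math.hypot(pos[i][0] - pos[j][0], pos[i][1] - pos[j][1])
            s += (dist - d[i][j]) ** 2 / d[i][j] ** 2
    return s

def sgd_layout(d, n_iter=60, eta_min=0.01, seed=0):
    """Move one pair of vertices at a time toward its target distance,
    with a decaying step size capped at 1 (a sketch of SGD stress layout)."""
    n = len(d)
    rng = random.Random(seed)
    pos = [[rng.random(), rng.random()] for _ in range(n)]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    eta_max = max(d[i][j] for i, j in pairs) ** 2
    for t in range(n_iter):
        eta = eta_max * (eta_min / eta_max) ** (t / max(n_iter - 1, 1))
        rng.shuffle(pairs)
        for i, j in pairs:
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            dist = math.hypot(dx, dy) or 1e-9
            mu = min(eta / d[i][j] ** 2, 1.0)     # w_ij * eta, capped at 1
            r = mu * (dist - d[i][j]) / (2 * dist)
            pos[i][0] -= r * dx
            pos[i][1] -= r * dy
            pos[j][0] += r * dx
            pos[j][1] += r * dy
    return pos
```

On a 5-vertex path graph, for example, the layout converges to a near-collinear embedding with stress close to zero after a few dozen sweeps over the vertex pairs.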

  • Conference paper
    Lertvittayakumjorn P, Toni F,

    Human-grounded evaluations of explanation methods for text classification

    , 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Publisher: ACL Anthology

Due to the black-box nature of deep learning models, methods for explaining the models’ results are crucial to gain trust from humans and support collaboration between AIs and humans. In this paper, we consider several model-agnostic and model-specific explanation methods for CNNs for text classification and conduct three human-grounded evaluations, focusing on different purposes of explanations: (1) revealing model behavior, (2) justifying model predictions, and (3) helping humans investigate uncertain predictions. The results highlight dissimilar qualities of the various explanation methods we consider and show the degree to which these methods could serve for each purpose.

  • Journal article
Čyras K, Birch D, Guo Y, Toni F, Dulay R, Turvey S, Greenberg D, Hapuarachchi T et al., 2019,

    Explanations by arbitrated argumentative dispute

    , Expert Systems with Applications, Vol: 127, Pages: 141-156, ISSN: 0957-4174

Explaining outputs determined algorithmically by machines is one of the most pressing and studied problems in Artificial Intelligence (AI) nowadays, but the equally pressing problem of using AI to explain outputs determined by humans is less studied. In this paper we advance a novel methodology integrating case-based reasoning and computational argumentation from AI to explain outcomes, determined by humans or by machines, indifferently, for cases characterised by discrete (static) features and/or (dynamic) stages. At the heart of our methodology lies the concept of arbitrated argumentative disputes between two fictitious disputants arguing, respectively, for or against a case's output in need of explanation, and where this case acts as an arbiter. Specifically, in explaining the outcome of a case in question, the disputants put forward as arguments relevant cases favouring their respective positions, with arguments/cases conflicting due to their features, stages and outcomes, and the applicability of arguments/cases arbitrated by the features and stages of the case in question. We in addition use arbitrated dispute trees to identify the excess features that help the winning disputant to win the dispute and thus complement the explanation. We evaluate our novel methodology theoretically, proving desirable properties thereof, and empirically, in the context of primary legislation in the United Kingdom (UK), concerning the passage of Bills that may or may not become laws. High-level factors underpinning a Bill's passage are its content-agnostic features such as type, number of sponsors, ballot order, as well as the UK Parliament's rules of conduct. Given high numbers of proposed legislation (hundreds of Bills a year), it is hard even for legal experts to explain on a large scale why certain Bills pass or not.
We show how our methodology can address this problem by automatically providing high-level explanations of why Bills pass or not, based on the given Bills and the

  • Conference paper
Čyras K, Letsios D, Misener R, Toni F et al., 2019,

    Argumentation for explainable scheduling

    , Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), Publisher: AAAI, Pages: 2752-2759

    Mathematical optimization offers highly-effective tools for finding solutions for problems with well-defined goals, notably scheduling. However, optimization solvers are often unexplainable black boxes whose solutions are inaccessible to users and which users cannot interact with. We define a novel paradigm using argumentation to empower the interaction between optimization solvers and users, supported by tractable explanations which certify or refute solutions. A solution can be from a solver or of interest to a user (in the context of 'what-if' scenarios). Specifically, we define argumentative and natural language explanations for why a schedule is (not) feasible, (not) efficient or (not) satisfying fixed user decisions, based on models of the fundamental makespan scheduling problem in terms of abstract argumentation frameworks (AFs). We define three types of AFs, whose stable extensions are in one-to-one correspondence with schedules that are feasible, efficient and satisfying fixed decisions, respectively. We extract the argumentative explanations from these AFs and the natural language explanations from the argumentative ones.
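The correspondence between stable extensions and schedules rests on the standard AF semantics: a set of arguments is stable iff it is conflict-free and attacks every argument outside it. A tiny brute-force enumerator makes that definition concrete. This is a generic AF utility, not the paper's scheduling-specific construction:

```python
from itertools import combinations

def stable_extensions(arguments, attacks):
    """Enumerate all stable extensions of an abstract argumentation
    framework (AF) by brute force. A set S is stable iff it is
    conflict-free and attacks every argument not in S."""
    att = set(attacks)
    extensions = []
    for r in range(len(arguments) + 1):
        for subset in combinations(arguments, r):
            s = set(subset)
            # conflict-free: no attack between two members of S
            if any((a, b) in att for a in s for b in s):
                continue
            # stability: every outside argument is attacked from S
            if all(any((a, b) in att for a in s)
                   for b in set(arguments) - s):
                extensions.append(s)
    return extensions
```

For the attack chain a → b → c the unique stable extension is {a, c}, while an odd attack cycle has no stable extension at all; in the paper's encoding, an AF with no stable extension corresponds to the absence of a schedule with the required property.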

  • Journal article
Johnston I, Hoffmann T, Greenbury S, Cominetti O, Jallow M, Kwiatkowski D, Barahona M, Jones N, Casals-Pascual C et al., 2019,

    Precision identification of high-risk phenotypes and progression pathways in severe malaria without requiring longitudinal data

    , npj Digital Medicine, Vol: 2, ISSN: 2398-6352

More than 400,000 deaths from severe malaria (SM) are reported every year, mainly in African children. The diversity of clinical presentations associated with SM indicates important differences in disease pathogenesis that require specific treatment, and this clinical heterogeneity of SM remains poorly understood. Here, we apply tools from machine learning and model-based inference to harness large-scale data and dissect the heterogeneity in patterns of clinical features associated with SM in 2904 Gambian children admitted to hospital with malaria. This quantitative analysis reveals features predicting the severity of individual patient outcomes, and the dynamic pathways of SM progression, notably inferred without requiring longitudinal observations. Bayesian inference of these pathways allows us to assign quantitative mortality risks to individual patients. By independently surveying expert practitioners, we show that this data-driven approach agrees with and expands the current state of knowledge on malaria progression, while simultaneously providing a data-supported framework for predicting clinical risk.

  • Conference paper
    AlAttar A, Rouillard L, Kormushev P, 2019,

    Autonomous air-hockey playing cobot using optimal control and vision-based Bayesian tracking

    , Towards Autonomous Robotic Systems, Publisher: Springer, ISSN: 0302-9743

    This paper presents a novel autonomous air-hockey playing collaborative robot (cobot) that provides human-like gameplay against human opponents. Vision-based Bayesian tracking of the puck and striker are used in an Analytic Hierarchy Process (AHP)-based probabilistic tactical layer for high-speed perception. The tactical layer provides commands for an active control layer that controls the Cartesian position and yaw angle of a custom end effector. The active layer uses optimal control of the cobot’s posture inside the task nullspace. The kinematic redundancy is resolved using a weighted Moore-Penrose pseudo-inversion technique. Experiments with human players show high-speed human-like gameplay with potential applications in the growing field of entertainment robotics.

  • Conference paper
    Falck F, Larppichet K, Kormushev P, 2019,

    DE VITO: A dual-arm, high degree-of-freedom, lightweight, inexpensive, passive upper-limb exoskeleton for robot teleoperation

    , TAROS: Annual Conference Towards Autonomous Robotic Systems, Publisher: Springer, ISSN: 0302-9743

While robotics has made significant advances in perception, planning and control in recent decades, the vast majority of tasks easily completed by a human, especially acting in dynamic, unstructured environments, are far from being autonomously performed by a robot. Teleoperation, remotely controlling a slave robot by a human operator, can be a realistic, complementary transition solution that uses the motion intelligence of a human in complex tasks while exploiting the robot’s autonomous reliability and precision in less challenging situations. We introduce DE VITO, a seven degree-of-freedom, dual-arm upper-limb exoskeleton that passively measures the pose of a human arm. DE VITO is a lightweight, simplistic and energy-efficient design with a total material cost of at least an order of magnitude less than previous work. Given the estimated human pose, we implement both joint and Cartesian space kinematic control algorithms and present qualitative experimental results on various complex manipulation tasks teleoperating Robot DE NIRO, a research platform for mobile manipulation, that demonstrate the functionality of DE VITO. We provide the CAD models, open-source code and supplementary videos of DE VITO at http://www.imperial.ac.uk/robot-intelligence/robots/de_vito/.

  • Journal article
Schaub MT, Delvenne JC, Lambiotte R, Barahona M et al., 2019,

    Multiscale dynamical embeddings of complex networks

    , Physical Review E, Vol: 99, Pages: 062308-1-062308-18, ISSN: 1539-3755

    Complex systems and relational data are often abstracted as dynamical processes on networks. To understand, predict, and control their behavior, a crucial step is to extract reduced descriptions of such networks. Inspired by notions from control theory, we propose a time-dependent dynamical similarity measure between nodes, which quantifies the effect a node-input has on the network. This dynamical similarity induces an embedding that can be employed for several analysis tasks. Here we focus on (i) dimensionality reduction, i.e., projecting nodes onto a low-dimensional space that captures dynamic similarity at different timescales, and (ii) how to exploit our embeddings to uncover functional modules. We exemplify our ideas through case studies focusing on directed networks without strong connectivity and signed networks. We further highlight how certain ideas from community detection can be generalized and linked to control theory, by using the here developed dynamical perspective.

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
