Search results

  • Conference paper
    Carvalho EDC, Clark R, Nicastro A, Kelly PHJ et al., 2020,

    Scalable uncertainty for computer vision with functional variational inference

    , CVPR 2020, Publisher: IEEE, Pages: 12003-12013

    As Deep Learning continues to yield successful applications in Computer Vision, the ability to quantify all forms of uncertainty is a paramount requirement for its safe and reliable deployment in the real world. In this work, we leverage the formulation of variational inference in function space, where we associate Gaussian Processes (GPs) to both Bayesian CNN priors and variational family. Since GPs are fully determined by their mean and covariance functions, we are able to obtain predictive uncertainty estimates at the cost of a single forward pass through any chosen CNN architecture and for any supervised learning task. By leveraging the structure of the induced covariance matrices, we propose numerically efficient algorithms which enable fast training in the context of high-dimensional tasks such as depth estimation and semantic segmentation. Additionally, we provide sufficient conditions for constructing regression loss functions whose probabilistic counterparts are compatible with aleatoric uncertainty quantification.
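
    A minimal sketch of the single-forward-pass idea described above (a hypothetical PyTorch illustration, not the authors' implementation): a small head on any CNN backbone outputs a predictive mean and log-variance per pixel, trained with a Gaussian negative log-likelihood of the kind whose compatibility with aleatoric uncertainty the paper characterises.

      # Illustrative sketch only (assumes PyTorch); all names are hypothetical.
      import torch
      import torch.nn as nn

      class MeanVarianceHead(nn.Module):
          def __init__(self, in_channels):
              super().__init__()
              self.mean = nn.Conv2d(in_channels, 1, kernel_size=1)
              self.log_var = nn.Conv2d(in_channels, 1, kernel_size=1)  # log-variance for numerical stability

          def forward(self, features):
              return self.mean(features), self.log_var(features)

      def gaussian_nll(mean, log_var, target):
          # Per-pixel Gaussian negative log-likelihood; the log_var term
          # penalises over-confident predictions (aleatoric uncertainty).
          return (0.5 * torch.exp(-log_var) * (target - mean) ** 2 + 0.5 * log_var).mean()

      features = torch.randn(2, 64, 32, 32)           # dummy backbone output
      target = torch.randn(2, 1, 32, 32)              # e.g. a depth map
      mean, log_var = MeanVarianceHead(64)(features)  # uncertainty from one forward pass
      loss = gaussian_nll(mean, log_var, target)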

  • Conference paper
    Rago A, Cocarascu O, Bechlivanidis C, Toni F et al., 2020,

    Argumentation as a framework for interactive explanations for recommendations

    , 17th International Conference on Principles of Knowledge Representation and Reasoning, Publisher: IJCAI

    As AI systems become ever more intertwined in our personal lives, the way in which they explain themselves to and interact with humans is an increasingly critical research area. The explanation of recommendations is thus a pivotal functionality in a user’s experience of a recommender system (RS), providing the possibility of enhancing many of its desirable features in addition to its effectiveness (accuracy with respect to users’ preferences). For an RS that we prove empirically is effective, we show how argumentative abstractions underpinning recommendations can provide the structural scaffolding for (different types of) interactive explanations (IEs), i.e. explanations empowering interactions with users. We prove formally that these IEs empower feedback mechanisms that guarantee that recommendations will improve with time, hence rendering the RS scrutable. Finally, we prove experimentally that the various forms of IE (tabular, textual and conversational) induce trust in the recommendations and provide a high degree of transparency in the RS’s functionality.

  • Journal article
    Biffi C, Cerrolaza Martinez JJ, Tarroni G, Bai W, Simoes Monteiro de Marvao A, Oktay O, Ledig C, Le Folgoc L, Kamnitsas K, Doumou G, Duan J, Prasad S, Cook S, O'Regan D, Rueckert D et al., 2020,

    Explainable anatomical shape analysis through deep hierarchical generative models

    , IEEE Transactions on Medical Imaging, Vol: 39, Pages: 2088-2099, ISSN: 0278-0062

    Quantification of anatomical shape changes currently relies on scalar global indexes which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of many conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep learning model for shape analysis. In particular, we exploit deep generative networks to model a population of anatomical segmentations through a hierarchy of conditional latent variables. At the highest level of this hierarchy, a two-dimensional latent space is simultaneously optimised to discriminate distinct clinical conditions, enabling the direct visualisation of the classification space. Moreover, the anatomical variability encoded by this discriminative latent space can be visualised in the segmentation space thanks to the generative properties of the model, making the classification task transparent. This approach yielded high accuracy in the categorisation of healthy and remodelled left ventricles when tested on unseen segmentations from our own multi-centre dataset as well as in an external validation set, and on hippocampi from healthy controls and patients with Alzheimer’s disease when tested on ADNI data. More importantly, it enabled the visualisation in three dimensions of both global and regional anatomical features which better discriminate between the conditions under examination. The proposed approach scales effectively to large populations, facilitating high-throughput analysis of normal anatomy and pathology in large-scale studies of volumetric imaging.
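
    A minimal sketch of the discriminative-latent idea (hypothetical PyTorch code, simplified to a single latent level where the paper uses a hierarchy): a VAE whose 2-D latent space is optimised jointly for reconstruction and for separating clinical conditions, so the classification space can be plotted directly.

      # Illustrative sketch only (assumes PyTorch); one latent level, not the paper's hierarchy.
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class DiscriminativeVAE(nn.Module):
          def __init__(self, in_dim, n_classes):
              super().__init__()
              self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
              self.mu = nn.Linear(128, 2)                 # 2-D latent: directly plottable
              self.log_var = nn.Linear(128, 2)
              self.dec = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, in_dim))
              self.clf = nn.Linear(2, n_classes)          # classifier acts on the latent itself

          def forward(self, x):
              h = self.enc(x)
              mu, log_var = self.mu(h), self.log_var(h)
              z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterisation
              return self.dec(z), self.clf(z), mu, log_var

      def loss_fn(model, x, y, beta=1.0):
          recon, logits, mu, log_var = model(x)
          kl = -0.5 * (1 + log_var - mu ** 2 - log_var.exp()).sum(dim=1).mean()
          # Reconstruction + KL + classification: the latent must both generate and discriminate.
          return F.mse_loss(recon, x) + beta * kl + F.cross_entropy(logits, y)

      x, y = torch.randn(8, 100), torch.randint(0, 2, (8,))  # dummy segmentations and labels
      loss = loss_fn(DiscriminativeVAE(100, n_classes=2), x, y)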

  • Journal article
    Baroni P, Toni F, Verheij B, 2020,

    On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games: 25 years later (Foreword)

    , Argument & Computation, Vol: 11, Pages: 1-14, ISSN: 1946-2166
  • Conference paper
    Albini E, Rago A, Baroni P, Toni F et al., 2020,

    Relation-Based Counterfactual Explanations for Bayesian Network Classifiers

    , The 29th International Joint Conference on Artificial Intelligence (IJCAI 2020)
  • Journal article
    Tsai Y-Y, Xiao B, Johns E, Yang G-Z et al., 2020,

    Constrained-Space Optimization and Reinforcement Learning for Complex Tasks

    , IEEE Robotics and Automation Letters, Vol: 5, Pages: 683-690, ISSN: 2377-3766
  • Conference paper
    Pardo F, Levdik V, Kormushev P, 2020,

    Scaling all-goals updates in reinforcement learning using convolutional neural networks

    , 34th AAAI Conference on Artificial Intelligence (AAAI 2020), Publisher: Association for the Advancement of Artificial Intelligence, Pages: 5355-5362, ISSN: 2374-3468

    Being able to reach any desired location in the environment can be a valuable asset for an agent. Learning a policy to navigate between all pairs of states individually is often not feasible. An all-goals updating algorithm uses each transition to learn Q-values towards all goals simultaneously and off-policy. However, the expensive numerous updates in parallel limited the approach to small tabular cases so far. To tackle this problem we propose to use convolutional network architectures to generate Q-values and updates for a large number of goals at once. We demonstrate the accuracy and generalization qualities of the proposed method on randomly generated mazes and Sokoban puzzles. In the case of on-screen goal coordinates, the resulting mapping from frames to distance-maps directly informs the agent about which places are reachable and in how many steps. As an example of application, we show that replacing the random actions in ε-greedy exploration by several actions towards feasible goals generates better exploratory trajectories on Montezuma’s Revenge and Super Mario All-Stars games.
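
    A minimal sketch of the all-goals idea (hypothetical PyTorch code, not the authors' architecture): a fully convolutional network maps one observation to a Q-value map with one channel per action and one spatial cell per on-screen goal, so a single forward pass yields Q-values towards every goal.

      # Illustrative sketch only (assumes PyTorch); not the authors' architecture.
      import torch
      import torch.nn as nn

      n_actions = 4
      qnet = nn.Sequential(
          nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
          nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
          nn.Conv2d(32, n_actions, 1),      # one Q-value per action, per pixel goal
      )

      obs = torch.randn(1, 3, 16, 16)       # dummy maze frame
      q_all_goals = qnet(obs)               # (1, n_actions, 16, 16): all goals in one pass
      # Acting greedily for each goal yields a distance-map-like quantity directly.
      distance_map = -q_all_goals.max(dim=1).values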

  • Book
    Deisenroth MP, Faisal AA, Ong CS, 2020,

    Mathematics for Machine Learning

    , Publisher: Cambridge University Press, ISBN: 9781108455145
  • Conference paper
    Saputra RP, Rakicevic N, Kormushev P, 2020,

    Sim-to-real learning for casualty detection from ground projected point cloud data

    , 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), Publisher: IEEE

    This paper addresses the problem of human body detection, particularly a human body lying on the ground (a.k.a. casualty), using point cloud data. This ability to detect a casualty is one of the most important features of mobile rescue robots, in order for them to be able to operate autonomously. We propose a deep-learning-based casualty detection method using a deep convolutional neural network (CNN). This network is trained to be able to detect a casualty using a point-cloud data input. In the method we propose, the point cloud input is pre-processed to generate a depth image-like ground-projected heightmap. This heightmap is generated based on the projected distance of each point onto the detected ground plane within the point cloud data. The generated heightmap, in image form, is then used as an input for the CNN to detect a human body lying on the ground. To train the neural network, we propose a novel sim-to-real approach, in which the network model is trained using synthetic data obtained in simulation and then tested on real sensor data. To make the model transferable to real data implementations, during the training we adopt specific data augmentation strategies with the synthetic training data. The experimental results show that data augmentation introduced during the training process is essential for improving the performance of the trained model on real data. More specifically, the results demonstrate that the data augmentations on raw point-cloud data have contributed to a considerable improvement of the trained model performance.
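
    A minimal sketch of the heightmap generation step (hypothetical NumPy code; plane parameters, resolution and sizes are made up): points are projected onto the detected ground plane and their residual heights are binned into a 2-D image for the CNN.

      # Illustrative sketch only (NumPy); all parameters are assumptions.
      import numpy as np

      def heightmap(points, normal, d, res=0.05, size=128):
          heights = points @ normal + d     # signed distance to the ground plane n·x + d = 0
          # Bin the in-plane coordinates (approximated here by x, y for a z-up plane).
          ix = np.clip((points[:, 0] / res).astype(int) + size // 2, 0, size - 1)
          iy = np.clip((points[:, 1] / res).astype(int) + size // 2, 0, size - 1)
          hmap = np.zeros((size, size))
          np.maximum.at(hmap, (iy, ix), heights)  # keep the highest point per cell
          return hmap

      points = np.random.rand(1000, 3)      # dummy point cloud
      image = heightmap(points, normal=np.array([0.0, 0.0, 1.0]), d=0.0)  # CNN input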

  • Conference paper
    Ding Z, Lepora N, Johns E, 2020,

    Sim-to-real transfer for optical tactile sensing

    , IEEE International Conference on Robotics and Automation, Publisher: IEEE, ISSN: 2152-4092

    Deep learning and reinforcement learning methods have been shown to enable learning of flexible and complex robot controllers. However, the reliance on large amounts of training data often requires data collection to be carried out in simulation, with a number of sim-to-real transfer methods being developed in recent years. In this paper, we study these techniques for tactile sensing using the TacTip optical tactile sensor, which consists of a deformable tip with a camera observing the positions of pins inside this tip. We designed a model for soft body simulation which was implemented using the Unity physics engine, and trained a neural network to predict the locations and angles of edges when in contact with the sensor. Using domain randomisation techniques for sim-to-real transfer, we show how this framework can be used to accurately predict edges with less than 1 mm prediction error in real-world testing, without any real-world data at all.
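
    A minimal sketch of the domain-randomisation step (hypothetical NumPy code; the noise model and magnitudes are assumptions, not the paper's): simulated pin positions are perturbed during training so the edge predictor also copes with real-sensor variation it never saw.

      # Illustrative sketch only (NumPy); noise model and magnitudes are assumptions.
      import numpy as np

      def randomise(pins, rng):
          pins = pins + rng.normal(0.0, 0.5, pins.shape)   # per-pin position noise
          pins = pins * rng.uniform(0.95, 1.05)            # global scale variation
          theta = rng.uniform(-0.05, 0.05)                 # small in-plane rotation
          rot = np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
          return pins @ rot.T

      rng = np.random.default_rng(0)
      sim_pins = rng.uniform(-10.0, 10.0, (127, 2))  # simulated pin layout (dummy)
      augmented = randomise(sim_pins, rng)           # one randomised training sample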

  • Journal article
    Stimberg M, Goodman D, Nowotny T, 2020,

    Brian2GeNN: accelerating spiking neural network simulations with graphics hardware

    , Scientific Reports, Vol: 10, Pages: 1-12, ISSN: 2045-2322

    “Brian” is a popular Python-based simulator for spiking neural networks, commonly used in computational neuroscience. GeNN is a C++-based meta-compiler for accelerating spiking neural network simulations using consumer or high-performance grade graphics processing units (GPUs). Here we introduce a new software package, Brian2GeNN, that connects the two systems so that users can make use of GeNN GPU acceleration when developing their models in Brian, without requiring any technical knowledge about GPUs, C++ or GeNN. The new Brian2GeNN software uses a pipeline of code generation to translate Brian scripts into C++ code that can be used as input to GeNN, and subsequently can be run on suitable NVIDIA GPU accelerators. From the user’s perspective, the entire pipeline is invoked by adding two simple lines to their Brian scripts. We have shown that, using Brian2GeNN, two non-trivial models from the literature can run tens to hundreds of times faster than on CPU.
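
    The "two simple lines" the abstract refers to correspond to the documented Brian2GeNN usage: importing the package and selecting the 'genn' device. A minimal illustration with a dummy Brian 2 model:

      from brian2 import NeuronGroup, run, ms, set_device
      import brian2genn            # line 1: makes the GeNN device available
      set_device('genn')           # line 2: route code generation through GeNN

      # Dummy model: 1000 leaky units; the run now compiles to GPU code via GeNN.
      group = NeuronGroup(1000, 'dv/dt = -v / (10*ms) : 1')
      run(100 * ms)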

  • Conference paper
    Johns E, Liu S, Davison A, 2020,

    End-to-end multi-task learning with attention

    , The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, Publisher: IEEE

    We propose a novel multi-task learning architecture, which allows learning of task-specific feature-level attention. Our design, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with a soft-attention module for each task. These modules allow for learning of task-specific features from the global features, whilst simultaneously allowing for features to be shared across different tasks. The architecture can be trained end-to-end and can be built upon any feed-forward neural network, is simple to implement, and is parameter efficient. We evaluate our approach on a variety of datasets, across both image-to-image predictions and image classification tasks. We show that our architecture is state-of-the-art in multi-task learning compared to existing methods, and is also less sensitive to various weighting schemes in the multi-task loss function. Code is available at https://github.com/lorenmt/mtan.
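
    A minimal sketch of the per-task soft-attention module (hypothetical PyTorch code, heavily simplified relative to the released implementation linked above): a learned mask in [0, 1] gates task-specific features out of the shared global feature pool.

      # Illustrative sketch only (assumes PyTorch); much simplified versus the released code.
      import torch
      import torch.nn as nn

      class TaskAttention(nn.Module):
          def __init__(self, channels):
              super().__init__()
              self.mask = nn.Sequential(
                  nn.Conv2d(channels, channels, 1), nn.ReLU(),
                  nn.Conv2d(channels, channels, 1), nn.Sigmoid(),  # soft mask in [0, 1]
              )

          def forward(self, shared_features):
              return self.mask(shared_features) * shared_features  # element-wise gating

      shared = torch.randn(1, 64, 32, 32)                 # global feature pool (dummy)
      task_heads = [TaskAttention(64) for _ in range(3)]  # one attention module per task
      task_features = [head(shared) for head in task_heads]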

  • Conference paper
    Čyras K, Karamlou A, Lee M, Letsios D, Misener R, Toni F et al., 2020,

    AI-assisted schedule explainer for nurse rostering

    , Pages: 2101-2103, ISSN: 1548-8403

    We present an argumentation-supported explanation generating system, called Schedule Explainer, that assists with makespan scheduling. Our stand-alone generic tool explains to a lay user why a resource allocation schedule is good or not, and offers actions to improve the schedule given the user's constraints. Schedule Explainer provides actionable textual explanations via an interactive graphical interface. We illustrate our system with a proof-of-concept application tool in a nurse rostering scenario whereby a shift-lead nurse aims to account for unexpected events by rescheduling some patient procedures to nurses and is aided by the system to do so.

  • Book chapter
    Cocarascu O, Toni F, 2020,

    Deploying Machine Learning Classifiers for Argumentative Relations “in the Wild”

    , Argumentation Library, Pages: 269-285

    Argument Mining (AM) aims at automatically identifying arguments and components of arguments in text, as well as at determining the relations between these arguments, on various annotated corpora using machine learning techniques (Lippi & Torroni, 2016).

  • Journal article
    Zambelli M, Cully A, Demiris Y, 2020,

    Multimodal representation models for prediction and control from partial information

    , Robotics and Autonomous Systems, Vol: 123, ISSN: 0921-8890

    Similar to humans, robots benefit from interacting with their environment through a number of different sensor modalities, such as vision, touch and sound. However, learning from different sensor modalities is difficult, because the learning model must be able to handle diverse types of signals, and learn a coherent representation even when parts of the sensor inputs are missing. In this paper, a multimodal variational autoencoder is proposed to enable an iCub humanoid robot to learn representations of its sensorimotor capabilities from different sensor modalities. The proposed model is able to (1) reconstruct missing sensory modalities, (2) predict the sensorimotor state of self and the visual trajectories of other agents' actions, and (3) control the agent to imitate an observed visual trajectory. Also, the proposed multimodal variational autoencoder can capture the kinematic redundancy of the robot motion through the learned probability distribution. Training multimodal models is not trivial due to the combinatorial complexity given by the possibility of missing modalities. We propose a strategy to train multimodal models, which successfully achieves improved performance of different reconstruction models. Finally, extensive experiments have been carried out using an iCub humanoid robot, showing high performance in multiple reconstruction, prediction and imitation tasks.
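
    A minimal sketch of training with missing modalities (hypothetical PyTorch code, much simplified from the paper's model): each modality's encoder contributes latent statistics only when that modality is present, and the decoders reconstruct all modalities, including the missing ones.

      # Illustrative sketch only (assumes PyTorch); dimensions and masking scheme are assumptions.
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class MultimodalVAE(nn.Module):
          def __init__(self, dims, z_dim=16):
              super().__init__()
              self.encoders = nn.ModuleList([nn.Linear(d, 2 * z_dim) for d in dims])
              self.decoders = nn.ModuleList([nn.Linear(z_dim, d) for d in dims])

          def forward(self, modalities, mask):
              # Average the latent statistics of the modalities that are present.
              stats = [enc(x) * m for enc, x, m in zip(self.encoders, modalities, mask)]
              mu, log_var = (sum(stats) / max(sum(mask), 1)).chunk(2, dim=-1)
              z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
              return [dec(z) for dec in self.decoders], mu, log_var

      model = MultimodalVAE(dims=[12, 6, 3])        # e.g. joints, touch, vision (dummy sizes)
      x = [torch.randn(8, d) for d in (12, 6, 3)]
      mask = [1, 0, 1]                              # touch "missing" in this batch
      recons, mu, log_var = model(x, mask)
      # Reconstruct all modalities, including the missing one (KL term omitted for brevity).
      loss = sum(F.mse_loss(r, t) for r, t in zip(recons, x))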

  • Conference paper
    Cocarascu O, Cabrio E, Villata S, Toni F et al., 2020,

    Dataset Independent Baselines for Relation Prediction in Argument Mining

    , Publisher: IOS Press, Pages: 45-52
  • Journal article
    Albini E, Lertvittayakumjorn P, Rago A, Toni F et al., 2020,

    DAX: Deep Argumentative eXplanation for Neural Networks

    , CoRR, Vol: abs/2012.05766
  • Journal article
    Rago A, Albini E, Baroni P, Toni F et al., 2020,

    Influence-Driven Explanations for Bayesian Network Classifiers

    , CoRR, Vol: abs/2012.05773
  • Conference paper
    Jha R, Belardinelli F, Toni F, 2020,

    Formal verification of debates in argumentation theory

    , Publisher: ACM, Pages: 940-947
  • Conference paper
    Liu S, Davison A, Johns E, 2019,

    Self-supervised generalisation with meta auxiliary learning

    , 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Publisher: Neural Information Processing Systems Foundation, Inc.

    Learning with auxiliary tasks can improve the ability of a primary task to generalise. However, this comes at the cost of manually labelling auxiliary data. We propose a new method which automatically learns appropriate labels for an auxiliary task, such that any supervised learning task can be improved without requiring access to any further data. The approach is to train two neural networks: a label-generation network to predict the auxiliary labels, and a multi-task network to train the primary task alongside the auxiliary task. The loss for the label-generation network incorporates the loss of the multi-task network, and so this interaction between the two networks can be seen as a form of meta learning with a double gradient. We show that our proposed method, Meta AuXiliary Learning (MAXL), outperforms single-task learning on 7 image datasets, without requiring any additional data. We also show that MAXL outperforms several other baselines for generating auxiliary labels, and is even competitive when compared with human-defined auxiliary labels. The self-supervised nature of our method leads to a promising new direction towards automated generalisation. Source code can be found at https://github.com/lorenmt/maxl.
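
    A minimal sketch of the double-gradient interaction (hypothetical PyTorch code; a simplified one-step version, not the released MAXL implementation linked above): the label-generation network is updated through the effect its generated labels have on the primary loss after one multi-task gradient step.

      # Illustrative sketch only (assumes PyTorch); simplified one-step meta update.
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      backbone = nn.Linear(10, 8)
      primary_head = nn.Linear(8, 3)
      aux_head = nn.Linear(8, 5)
      label_net = nn.Sequential(nn.Linear(10, 5), nn.Softmax(dim=-1))  # generates auxiliary labels
      meta_opt = torch.optim.SGD(label_net.parameters(), lr=1e-2)

      x, y = torch.randn(4, 10), torch.randint(0, 3, (4,))
      lr = 0.1

      # Multi-task loss: primary task plus auxiliary task with generated labels.
      feat = backbone(x)
      aux_targets = label_net(x)
      loss = F.cross_entropy(primary_head(feat), y) \
           + F.kl_div(F.log_softmax(aux_head(feat), dim=-1), aux_targets, reduction='batchmean')

      # Differentiable inner step on the backbone (create_graph keeps the
      # dependence on label_net, so the meta-gradient can flow through it).
      grads = torch.autograd.grad(loss, backbone.weight, create_graph=True)[0]
      new_weight = backbone.weight - lr * grads

      # Meta step: the primary loss after the inner step drives the label network.
      meta_loss = F.cross_entropy(primary_head(x @ new_weight.t() + backbone.bias), y)
      meta_opt.zero_grad()
      meta_loss.backward()
      meta_opt.step()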

