Search results

  • Journal article
    Nurek M, Delaney BC, Kostopoulou O, 2020,

    Risk assessment and antibiotic prescribing decisions in children presenting to UK primary care with cough: a vignette study

    , BMJ Open, Vol: 10, ISSN: 2044-6055

    Objectives: The validated “STARWAVe” clinical prediction rule (CPR) uses seven variables to guide risk assessment and antimicrobial stewardship in children presenting with cough (Short illness duration, Temperature, Age, Recession, Wheeze, Asthma, Vomiting). We aimed to compare General Practitioners’ (GPs) risk assessments and prescribing decisions to those of STARWAVe, and assess the influence of the CPR’s clinical variables. Setting: Primary care. Participants: 252 GPs, currently practising in the UK. Design: GPs were randomly assigned to view four (of a possible eight) clinical vignettes online. Each vignette depicted a child presenting with cough, who was described in terms of the seven STARWAVe variables. Systematically, we manipulated patient age (20 months vs. 5 years), illness duration (3 vs. 6 days), vomiting (present vs. absent) and wheeze (present vs. absent), holding the remaining STARWAVe variables constant. Outcome measures: Per vignette, GPs assessed risk of hospitalisation and indicated whether they would prescribe antibiotics or not. Results: GPs overestimated risk of hospitalisation in 9% of vignette presentations (88/1008) and underestimated it in 46% (459/1008). Despite underestimating risk, they overprescribed: 78% of prescriptions were unnecessary relative to GPs’ own risk assessments (121/156), while 83% were unnecessary relative to STARWAVe risk assessments (130/156). All four of the manipulated variables influenced risk assessments, but only three influenced prescribing decisions: a shorter illness duration reduced prescribing odds (OR 0.14, 95% CI 0.08-0.27, p<0.001), while vomiting and wheeze increased them (OR 2.17, 95% CI 1.32-3.57, p=0.002 for vomiting; OR 8.98, 95% CI 4.99-16.15, p<0.001 for wheeze). Conclusions: Relative to STARWAVe, GPs underestimated risk of hospitalisation, overprescribed, and appeared to

  • Conference paper
    Flageat M, Cully A, 2020,

    Fast and stable MAP-Elites in noisy domains using deep grids

    , 2020 Conference on Artificial Life, Publisher: Massachusetts Institute of Technology, Pages: 273-282

    Quality-Diversity optimisation algorithms enable the evolution of collections of both high-performing and diverse solutions. These collections offer the possibility to quickly adapt and switch from one solution to another in case it is not working as expected. It therefore finds many applications in real-world domain problems such as robotic control. However, QD algorithms, like most optimisation algorithms, are very sensitive to uncertainty on the fitness function, but also on the behavioural descriptors. Yet, such uncertainties are frequent in real-world applications. Few works have explored this issue in the specific case of QD algorithms, and inspired by the literature in Evolutionary Computation, mainly focus on using sampling to approximate the ”true” value of the performances of a solution. However, sampling approaches require a high number of evaluations, which in many applications such as robotics, can quickly become impractical. In this work, we propose Deep-Grid MAP-Elites, a variant of the MAP-Elites algorithm that uses an archive of similar previously encountered solutions to approximate the performance of a solution. We compare our approach to previously explored ones on three noisy tasks: a standard optimisation task, the control of a redundant arm and a simulated Hexapod robot. The experimental results show that this simple approach is significantly more resilient to noise on the behavioural descriptors, while achieving competitive performances in terms of fitness optimisation, and being more sample-efficient than other existing approaches.
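
    A minimal sketch of the core idea described above (estimating a solution's performance from similar previously encountered solutions stored in the same archive cell), assuming a toy noisy 1-D problem; the names deep_grid, CELL_DEPTH and estimate_fitness, and the replacement rule, are illustrative and not the authors' implementation.

```python
import numpy as np

# Toy noisy problem: fitness and behavioural descriptor of a 1-D genotype.
def evaluate(x, rng):
    fitness = -(x - 0.3) ** 2 + rng.normal(0, 0.1)        # noisy fitness
    descriptor = np.clip(x + rng.normal(0, 0.05), 0, 1)   # noisy descriptor
    return fitness, descriptor

N_CELLS, CELL_DEPTH = 20, 10                 # illustrative grid resolution / cell capacity
deep_grid = [[] for _ in range(N_CELLS)]     # each cell keeps several past solutions

def cell_index(descriptor):
    return min(int(descriptor * N_CELLS), N_CELLS - 1)

def estimate_fitness(cell):
    # Approximate the "true" fitness from previously encountered solutions in the
    # same cell, instead of re-evaluating one solution many times.
    return np.mean([f for _, f in cell]) if cell else -np.inf

rng = np.random.default_rng(0)
for _ in range(2000):
    x = rng.uniform(0, 1)                    # random parent/mutation stand-in
    f, d = evaluate(x, rng)
    cell = deep_grid[cell_index(d)]
    cell.append((x, f))
    if len(cell) > CELL_DEPTH:               # bounded depth: drop the oldest entry
        cell.pop(0)

best = max(range(N_CELLS), key=lambda i: estimate_fitness(deep_grid[i]))
print("best cell", best, "estimated fitness", estimate_fitness(deep_grid[best]))
```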

  • Conference paper
    Carvalho EDC, Clark R, Nicastro A, Kelly PHJ et al., 2020,

    Scalable uncertainty for computer vision with functional variational inference

    , CVPR 2020, Publisher: IEEE, Pages: 12003-12013

    As Deep Learning continues to yield successful applications in ComputerVision, the ability to quantify all forms of uncertainty is a paramountrequirement for its safe and reliable deployment in the real-world. In thiswork, we leverage the formulation of variational inference in function space,where we associate Gaussian Processes (GPs) to both Bayesian CNN priors andvariational family. Since GPs are fully determined by their mean and covariancefunctions, we are able to obtain predictive uncertainty estimates at the costof a single forward pass through any chosen CNN architecture and for anysupervised learning task. By leveraging the structure of the induced covariancematrices, we propose numerically efficient algorithms which enable fasttraining in the context of high-dimensional tasks such as depth estimation andsemantic segmentation. Additionally, we provide sufficient conditions forconstructing regression loss functions whose probabilistic counterparts arecompatible with aleatoric uncertainty quantification.
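
    As an illustration of the last point only (a regression loss whose probabilistic counterpart accounts for aleatoric uncertainty), here is a minimal heteroscedastic Gaussian negative log-likelihood in plain NumPy; it is a generic construction, not the functional variational inference machinery of the paper, and gaussian_nll and the toy depth values are hypothetical.

```python
import numpy as np

def gaussian_nll(y, mean, log_var):
    """Per-pixel negative log-likelihood of y under N(mean, exp(log_var)).

    Minimising this jointly over mean and log_var lets a model report higher
    variance (aleatoric uncertainty) where the regression target is noisy,
    instead of paying the full squared-error penalty there.
    """
    return 0.5 * (log_var + (y - mean) ** 2 / np.exp(log_var)) + 0.5 * np.log(2 * np.pi)

# Toy depth-map example: a hypothetical model head outputs a mean and a
# log-variance per pixel in a single forward pass.
rng = np.random.default_rng(0)
y_true = rng.uniform(0.5, 5.0, size=(8, 8))         # "ground-truth" depth
mean_pred = y_true + rng.normal(0, 0.2, size=(8, 8))
log_var_pred = np.full((8, 8), np.log(0.04))        # predicted variance of 0.2**2

print("mean NLL:", gaussian_nll(y_true, mean_pred, log_var_pred).mean())
```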

  • Journal article
    Biffi C, Cerrolaza Martinez JJ, Tarroni G, Bai W, Simoes Monteiro de Marvao A, Oktay O, Ledig C, Le Folgoc L, Kamnitsas K, Doumou G, Duan J, Prasad S, Cook S, O'Regan D, Rueckert D et al., 2020,

    Explainable anatomical shape analysis through deep hierarchical generative models

    , IEEE Transactions on Medical Imaging, Vol: 39, Pages: 2088-2099, ISSN: 0278-0062

    Quantification of anatomical shape changes currently relies on scalar global indexes which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of many conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep learning model for shape analysis. In particular, we exploit deep generative networks to model a population of anatomical segmentations through a hierarchy of conditional latent variables. At the highest level of this hierarchy, a two-dimensional latent space is simultaneously optimised to discriminate distinct clinical conditions, enabling the direct visualisation of the classification space. Moreover, the anatomical variability encoded by this discriminative latent space can be visualised in the segmentation space thanks to the generative properties of the model, making the classification task transparent. This approach yielded high accuracy in the categorisation of healthy and remodelled left ventricles when tested on unseen segmentations from our own multi-centre dataset as well as in an external validation set, and on hippocampi from healthy controls and patients with Alzheimer’s disease when tested on ADNI data. More importantly, it enabled the visualisation in three-dimensions of both global and regional anatomical features which better discriminate between the conditions under exam. The proposed approach scales effectively to large populations, facilitating high-throughput analysis of normal anatomy and pathology in large-scale studies of volumetric imaging.
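
    A minimal PyTorch sketch of the general pattern the abstract describes (a low-dimensional latent space trained jointly for reconstruction and classification, so it can be visualised and discriminates clinical conditions directly); the layer sizes, single-level latent and loss weighting are illustrative assumptions, not the authors' hierarchical architecture.

```python
import torch
import torch.nn as nn

class Discriminative2DVAE(nn.Module):
    """Toy autoencoder with a 2-D latent space shared by a decoder and a classifier."""
    def __init__(self, in_dim=64 * 64, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, 128), nn.ReLU())
        self.to_mu, self.to_logvar = nn.Linear(128, 2), nn.Linear(128, 2)
        self.decoder = nn.Sequential(nn.Linear(2, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim), nn.Sigmoid())
        self.classifier = nn.Linear(2, n_classes)   # classify directly in the 2-D latent

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return self.decoder(z), self.classifier(mu), mu, logvar

model = Discriminative2DVAE()
x = torch.rand(4, 1, 64, 64)                  # stand-in for anatomical segmentations
y = torch.randint(0, 2, (4,))                 # stand-in clinical labels
recon, logits, mu, logvar = model(x)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = (nn.functional.binary_cross_entropy(recon, x.flatten(1))
        + nn.functional.cross_entropy(logits, y) + 1e-3 * kl)   # illustrative weighting
loss.backward()
print(float(loss))
```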

  • Journal article
    Baroni P, Toni F, Verheij B, 2020,

    On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games: 25 years later (foreword)

    , Argument & Computation, Vol: 11, Pages: 1-14, ISSN: 1946-2166
  • Conference paper
    Čyras K, Karamlou A, Lee M, Letsios D, Misener R, Toni F et al., 2020,

    AI-assisted schedule explainer for nurse rostering

    , AAMAS, Pages: 2101-2103, ISSN: 1548-8403

    We present an argumentation-supported explanation generating system, called Schedule Explainer, that assists with makespan scheduling. Our stand-alone generic tool explains to a lay user why a resource allocation schedule is good or not, and offers actions to improve the schedule given the user's constraints. Schedule Explainer provides actionable textual explanations via an interactive graphical interface. We illustrate our system with a proof-of-concept application tool in a nurse rostering scenario whereby a shift-lead nurse aims to account for unexpected events by rescheduling some patient procedures to nurses and is aided by the system to do so.

  • Conference paper
    Albini E, Rago A, Baroni P, Toni F et al., 2020,

    Relation-Based Counterfactual Explanations for Bayesian Network Classifiers

    , The 29th International Joint Conference on Artificial Intelligence (IJCAI 2020)
  • Journal article
    Tsai Y-Y, Xiao B, Johns E, Yang G-Z et al., 2020,

    Constrained-space optimization and reinforcement learning for complex tasks

    , IEEE Robotics and Automation Letters, Vol: 5, Pages: 683-690, ISSN: 2377-3766

    Learning from demonstration is increasingly used for transferring operator manipulation skills to robots. In practice, it is important to cater for limited data and imperfect human demonstrations, as well as underlying safety constraints. This article presents a constrained-space optimization and reinforcement learning scheme for managing complex tasks. Through interactions within the constrained space, the reinforcement learning agent is trained to optimize the manipulation skills according to a defined reward function. After learning, the optimal policy is derived from the well-trained reinforcement learning agent, which is then implemented to guide the robot to conduct tasks that are similar to the experts' demonstrations. The effectiveness of the proposed method is verified with a robotic suturing task, demonstrating that the learned policy outperformed the experts' demonstrations in terms of the smoothness of the joint motion and end-effector trajectories, as well as the overall task completion time.
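
    A minimal sketch of the general idea of exploring only within a constrained space during reinforcement learning; the toy 1-D action set, the box constraint and the bandit-style update are illustrative stand-ins, not the scheme, constraints or reward function used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
SAFE_LOW, SAFE_HIGH = -0.5, 0.5          # constraint, e.g. derived from demonstrations
actions = np.linspace(-1.0, 1.0, 21)     # discretised action set
q = np.zeros(len(actions))               # single-state Q-values for brevity

def constrain(a):
    # Project any proposed action back into the safe (constrained) region.
    return float(np.clip(a, SAFE_LOW, SAFE_HIGH))

def reward(a):
    # Toy reward: prefer small, smooth motions around a target of 0.2.
    return -abs(a - 0.2)

alpha, eps = 0.1, 0.2
for step in range(5000):
    i = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(q))
    a = constrain(actions[i])                    # interaction stays inside the safe set
    q[i] += alpha * (reward(a) - q[i])           # incremental value update

print("learned action:", constrain(actions[int(np.argmax(q))]))
```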

  • Journal article
    Dur TH, Arcucci R, Mottet L, Molina Solana M, Pain C, Guo Y-K et al., 2020,

    Weak Constraint Gaussian Processes for optimal sensor placement

    , Journal of Computational Science, Vol: 42, ISSN: 1877-7503
  • Journal article
    Wu P, Sun J, Chang X, Zhang W, Arcucci R, Guo Y, Pain CC et al., 2020,

    Data-driven reduced order model with temporal convolutional neural network

    , Computer Methods in Applied Mechanics and Engineering, Vol: 360, ISSN: 0045-7825
  • Journal article
    Calvo RA, Peters D, Cave S, 2020,

    Advancing impact assessment for intelligent systems

    , Nature Machine Intelligence, Vol: 2, Pages: 89-91, ISSN: 2522-5839
  • Conference paper
    Pardo F, Levdik V, Kormushev P, 2020,

    Scaling all-goals updates in reinforcement learning using convolutional neural networks

    , 34th AAAI Conference on Artificial Intelligence (AAAI 2020), Publisher: Association for the Advancement of Artificial Intelligence, Pages: 5355-5362, ISSN: 2374-3468

    Being able to reach any desired location in the environment can be a valuable asset for an agent. Learning a policy to navigate between all pairs of states individually is often not feasible. An all-goals updating algorithm uses each transition to learn Q-values towards all goals simultaneously and off-policy. However, the expensive numerous updates in parallel limited the approach to small tabular cases so far. To tackle this problem we propose to use convolutional network architectures to generate Q-values and updates for a large number of goals at once. We demonstrate the accuracy and generalization qualities of the proposed method on randomly generated mazes and Sokoban puzzles. In the case of on-screen goal coordinates the resulting mapping from frames to distance-maps directly informs the agent about which places are reachable and in how many steps. As an example of application we show that replacing the random actions in ε-greedy exploration by several actions towards feasible goals generates better exploratory trajectories on Montezuma’s Revenge and Super Mario All-Stars games.
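
    A minimal sketch of the kind of convolutional mapping the abstract describes, where a single forward pass produces a Q-value for every on-screen goal coordinate at once; the fully convolutional AllGoalsQNet below is an illustrative assumption, not the authors' network.

```python
import torch
import torch.nn as nn

class AllGoalsQNet(nn.Module):
    """Map an observation to a (num_actions, H, W) grid of Q-values:
    one Q-value per action and per on-screen goal coordinate."""
    def __init__(self, in_channels=3, num_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_actions, kernel_size=1),   # one Q-map per action
        )

    def forward(self, obs):
        return self.net(obs)

net = AllGoalsQNet()
frame = torch.rand(1, 3, 16, 16)             # toy maze observation
q_all_goals = net(frame)                     # shape (1, 4, 16, 16)

# Greedy action towards one particular goal cell (row 5, column 9):
print(int(q_all_goals[0, :, 5, 9].argmax()))
```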

  • Book
    Deisenroth MP, Faisal AA, Ong CS, 2020,

    Mathematics for Machine Learning

    , Publisher: Cambridge University Press, ISBN: 9781108455145
  • Conference paper
    Saputra RP, Rakicevic N, Kormushev P, 2020,

    Sim-to-real learning for casualty detection from ground projected point cloud data

    , 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), Publisher: IEEE

    This paper addresses the problem of human body detection, particularly a human body lying on the ground (a.k.a. casualty), using point cloud data. This ability to detect a casualty is one of the most important features of mobile rescue robots, in order for them to be able to operate autonomously. We propose a deep-learning-based casualty detection method using a deep convolutional neural network (CNN). This network is trained to be able to detect a casualty using a point-cloud data input. In the method we propose, the point cloud input is pre-processed to generate a depth-image-like ground-projected heightmap. This heightmap is generated based on the projected distance of each point onto the detected ground plane within the point cloud data. The generated heightmap, in image form, is then used as an input for the CNN to detect a human body lying on the ground. To train the neural network, we propose a novel sim-to-real approach, in which the network model is trained using synthetic data obtained in simulation and then tested on real sensor data. To make the model transferable to real data implementations, during the training we adopt specific data augmentation strategies with the synthetic training data. The experimental results show that data augmentation introduced during the training process is essential for improving the performance of the trained model on real data. More specifically, the results demonstrate that the data augmentations on raw point-cloud data have contributed to a considerable improvement of the trained model performance.
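
    A minimal NumPy sketch of the ground-projection step described above, turning a point cloud into a heightmap image by binning each point's signed distance from a known ground plane; the grid resolution, extent and plane parameters are illustrative, and this is not the authors' preprocessing code.

```python
import numpy as np

def ground_projected_heightmap(points, plane_normal, plane_d,
                               grid_size=64, extent=4.0):
    """points: (N, 3) array in the sensor frame; ground plane: n·p + d = 0, unit normal.
    Returns a (grid_size, grid_size) image of the max height above the plane per cell."""
    n = plane_normal / np.linalg.norm(plane_normal)
    heights = points @ n + plane_d                 # signed distance to the ground plane
    xy = points[:, :2]                             # assumes a near-horizontal ground plane
    idx = np.floor((xy + extent) / (2 * extent) * grid_size).astype(int)
    valid = ((idx >= 0) & (idx < grid_size)).all(axis=1) & (heights > 0)
    heightmap = np.zeros((grid_size, grid_size), dtype=np.float32)
    for (i, j), h in zip(idx[valid], heights[valid]):
        heightmap[i, j] = max(heightmap[i, j], h)  # keep the tallest point in each cell
    return heightmap

# Toy cloud: flat ground plus a low-lying, body-like blob of points.
rng = np.random.default_rng(0)
ground = np.c_[rng.uniform(-4, 4, (500, 2)), np.zeros(500)]
blob = np.c_[rng.normal(1.0, 0.3, (200, 2)), rng.uniform(0.05, 0.3, 200)]
hm = ground_projected_heightmap(np.vstack([ground, blob]), np.array([0, 0, 1.0]), 0.0)
print(hm.shape, hm.max())
```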

  • Journal article
    Stimberg M, Goodman D, Nowotny T, 2020,

    Brian2GeNN: accelerating spiking neural network simulations with graphics hardware

    , Scientific Reports, Vol: 10, Pages: 1-12, ISSN: 2045-2322

    “Brian” is a popular Python-based simulator for spiking neural networks, commonly used in computational neuroscience. GeNN is a C++-based meta-compiler for accelerating spiking neural network simulations using consumer or high performance grade graphics processing units (GPUs). Here we introduce a new software package, Brian2GeNN, that connects the two systems so that users can make use of GeNN GPU acceleration when developing their models in Brian, without requiring any technical knowledge about GPUs, C++ or GeNN. The new Brian2GeNN software uses a pipeline of code generation to translate Brian scripts into C++ code that can be used as input to GeNN, and subsequently can be run on suitable NVIDIA GPU accelerators. From the user’s perspective, the entire pipeline is invoked by adding two simple lines to their Brian scripts. We have shown that using Brian2GeNN, two non-trivial models from the literature can run tens to hundreds of times faster than on CPU.
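
    A minimal sketch of how such a Brian 2 script might look with the two extra lines mentioned above (an import of brian2genn and a call to set_device('genn'), as described in the Brian2GeNN documentation); the toy neuron model itself is arbitrary and assumes GeNN and a suitable GPU are installed.

```python
from brian2 import *
import brian2genn            # the first of the two extra lines

set_device('genn')           # the second: route code generation through GeNN/GPU

# Arbitrary toy model: 1000 leaky integrate-and-fire-style units.
tau = 10*ms
eqs = 'dv/dt = (1.1 - v) / tau : 1'
group = NeuronGroup(1000, eqs, threshold='v > 1', reset='v = 0', method='exact')
spikes = SpikeMonitor(group)

run(100*ms)
print(spikes.num_spikes)
```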

  • Conference paper
    Johns E, Liu S, Davison A, 2020,

    End-to-end multi-task learning with attention

    , The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, Publisher: IEEE

    We propose a novel multi-task learning architecture, which allows learning of task-specific feature-level attention. Our design, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with a soft-attention module for each task. These modules allow for learning of task-specific features from the global features, whilst simultaneously allowing for features to be shared across different tasks. The architecture can be trained end-to-end and can be built upon any feed-forward neural network, is simple to implement, and is parameter efficient. We evaluate our approach on a variety of datasets, across both image-to-image predictions and image classification tasks. We show that our architecture is state-of-the-art in multi-task learning compared to existing methods, and is also less sensitive to various weighting schemes in the multi-task loss function. Code is available at https://github.com/lorenmt/mtan.
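
    A minimal PyTorch sketch of a per-task soft-attention module over a shared feature map, in the spirit of the abstract; the layer shapes and module layout are illustrative and this is not the released MTAN code (linked above).

```python
import torch
import torch.nn as nn

class TaskAttention(nn.Module):
    """Learn a per-task soft mask over a shared (global) feature map."""
    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1), nn.BatchNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=1), nn.Sigmoid(),  # values in (0, 1)
        )

    def forward(self, shared_features):
        # Element-wise gating: each task keeps the shared features it attends to.
        return self.mask(shared_features) * shared_features

shared = torch.rand(2, 64, 32, 32)                     # output of a shared backbone block
seg_att, depth_att = TaskAttention(64), TaskAttention(64)
print(seg_att(shared).shape, depth_att(shared).shape)  # task-specific feature maps
```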

  • Book chapter
    Cocarascu O, Toni F, 2020,

    Deploying Machine Learning Classifiers for Argumentative Relations “in the Wild”

    , Argumentation Library, Pages: 269-285

    Argument Mining (AM) aims at automatically identifying arguments and components of arguments in text, as well as at determining the relations between these arguments, on various annotated corpora using machine learning techniques (Lippi & Torroni, 2016).

  • Conference paper
    Nadler P, Arcucci R, Guo Y, 2020,

    An Econophysical Analysis of the Blockchain Ecosystem

    , Pages: 27-42, ISSN: 2198-7246

    We propose a novel modelling approach for the cryptocurrency ecosystem. We model on-chain and off-chain interactions as econophysical systems and employ methods from physical sciences to conduct interpretation of latent parameters describing the cryptocurrency ecosystem as well as to generate predictions. We work with an extracted dataset from the Ethereum blockchain which we combine with off-chain data from exchanges. This allows us to study a large part of the transaction flows related to the cryptocurrency ecosystem. From this aggregate system view we deduce that movements on the blockchain and price and trading action on exchanges are interrelated. The relationship is one-directional: on-chain token flows towards exchanges have little effect on prices and trading volume, but changes in price and volume affect the flow of tokens towards the exchange.

  • Journal article
    Zambelli M, Cully A, Demiris Y, 2020,

    Multimodal representation models for prediction and control from partial information

    , Robotics and Autonomous Systems, Vol: 123, ISSN: 0921-8890

    Similar to humans, robots benefit from interacting with their environment through a number of different sensor modalities, such as vision, touch and sound. However, learning from different sensor modalities is difficult, because the learning model must be able to handle diverse types of signals, and learn a coherent representation even when parts of the sensor inputs are missing. In this paper, a multimodal variational autoencoder is proposed to enable an iCub humanoid robot to learn representations of its sensorimotor capabilities from different sensor modalities. The proposed model is able to (1) reconstruct missing sensory modalities, (2) predict the sensorimotor state of self and the visual trajectories of other agents' actions, and (3) control the agent to imitate an observed visual trajectory. Also, the proposed multimodal variational autoencoder can capture the kinematic redundancy of the robot motion through the learned probability distribution. Training multimodal models is not trivial due to the combinatorial complexity given by the possibility of missing modalities. We propose a strategy to train multimodal models, which successfully achieves improved performance of different reconstruction models. Finally, extensive experiments have been carried out using an iCub humanoid robot, showing high performance in multiple reconstruction, prediction and imitation tasks.
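
    A minimal sketch of one common way to handle missing modalities in a multimodal autoencoder (encode each available modality, fuse the latent codes, decode every modality); the two-modality toy setup below is an illustrative assumption, not the architecture or training strategy used in the paper.

```python
import torch
import torch.nn as nn

class MultimodalAE(nn.Module):
    """Toy two-modality autoencoder with a shared latent space."""
    def __init__(self, dims=(12, 6), latent=4):
        super().__init__()
        self.encoders = nn.ModuleList(nn.Linear(d, latent) for d in dims)
        self.decoders = nn.ModuleList(nn.Linear(latent, d) for d in dims)

    def forward(self, inputs):
        # inputs: one tensor per modality, or None where a modality is missing.
        codes = [enc(x) for enc, x in zip(self.encoders, inputs) if x is not None]
        z = torch.stack(codes).mean(dim=0)        # fuse whatever is available
        return [dec(z) for dec in self.decoders]  # reconstruct all modalities

model = MultimodalAE()
proprioception = torch.rand(1, 12)
touch = None                                      # pretend this modality is missing
recon_proprio, recon_touch = model([proprioception, touch])
print(recon_proprio.shape, recon_touch.shape)     # both modalities reconstructed
```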

  • Conference paper
    Cocarascu O, Cabrio E, Villata S, Toni F et al., 2020,

    Dataset Independent Baselines for Relation Prediction in Argument Mining

    , Publisher: IOS Press, Pages: 45-52

