Search results

    Zhong Q, Fan X, Luo X, Toni F, et al., 2019, An explainable multi-attribute decision model based on argumentation, Expert Systems with Applications, Vol: 117, Pages: 42-61, ISSN: 0957-4174

    Baroni P, Rago A, Toni F, 2019, From fine-grained properties to broad principles for gradual argumentation: A principled spectrum, International Journal of Approximate Reasoning, Vol: 105, Pages: 252-286, ISSN: 0888-613X

    © 2018 Elsevier Inc. The study of properties of gradual evaluation methods in argumentation has received increasing attention in recent years, with studies devoted to various classes of frameworks/methods leading to conceptually similar but formally distinct properties in different contexts. In this paper we provide a novel systematic analysis for this research landscape by making three main contributions. First, we identify groups of conceptually related properties in the literature, which can be regarded as based on common patterns and, using these patterns, we evidence that many further novel properties can be considered. Then, we provide a simplifying and unifying perspective for these groups of properties by showing that they are all implied by novel parametric principles of (either strict or non-strict) balance and monotonicity. Finally, we show that (instances of) these principles (and thus the group, literature and novel properties that they imply) are satisfied by several quantitative argumentation formalisms in the literature, thus confirming the principles’ general validity and utility to support a compact, yet comprehensive, analysis of properties of gradual argumentation.

    Kuntz J, Thomas P, Stan G-B, Barahona M, et al., 2019, The exit time finite state projection scheme: bounding exit distributions and occupation measures of continuous-time Markov chains

    We introduce the exit time finite state projection (ETFSP) scheme, a truncation-based method that yields approximations to the exit distribution and occupation measure associated with the time of exit from a domain (i.e., the time of first passage to the complement of the domain) of time-homogeneous continuous-time Markov chains. We prove that: (i) the computed approximations bound the measures from below; (ii) the total variation distances between the approximations and the measures decrease monotonically as states are added to the truncation; and (iii) the scheme converges, in the sense that, as the truncation tends to the entire state space, the total variation distances tend to zero. Furthermore, we give a computable bound on the total variation distance between the exit distribution and its approximation, and we delineate the cases in which the bound is sharp. We also revisit the related finite state projection scheme and give a comprehensive account of its theoretical properties. We demonstrate the use of the ETFSP scheme by applying it to two biological examples: the computation of the first passage time associated with the expression of a gene, and the fixation times of competing species subject to demographic noise.

    Bello G, Dawes T, Duan J, Biffi C, Simoes Monteiro de Marvao A, Howard L, Gibbs S, Wilkins M, Cook S, Rueckert D, O'Regan D, et al., Deep learning cardiac motion analysis for human survival prediction, Nature Machine Intelligence, ISSN: 2522-5839

    Clarke JM, Warren LR, Arora S, Barahona M, Darzi AW, et al., 2018, Guiding interoperable electronic health records through patient-sharing networks, npj Digital Medicine, Vol: 1, ISSN: 2398-6352

    Effective sharing of clinical information between care providers is a critical component of a safe, efficient health system. National data-sharing systems may be costly, politically contentious and do not reflect local patterns of care delivery. This study examines hospital attendances in England from 2013 to 2015 to identify instances of patient sharing between hospitals. Of 19.6 million patients receiving care from 155 hospital care providers, 130 million presentations were identified. On 14.7 million occasions (12%), patients attended a different hospital to the one they attended on their previous interaction. A network of hospitals was constructed based on the frequency of patient sharing between them, and partitioned using the Louvain algorithm into ten distinct data-sharing communities, improving the continuity of data sharing in such instances from 0 to 65–95%. Locally implemented data-sharing communities of hospitals may achieve effective accessibility of clinical information without a large-scale national interoperable information system.
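    The network-construction step described above, counting how often consecutive attendances by the same patient involve two different hospitals, can be sketched in a few lines. The attendance data and hospital names below are invented for illustration, and the Louvain community-detection step is omitted:

```python
from collections import Counter

def patient_sharing_edges(attendances):
    """Count patient-sharing events between hospitals.

    `attendances` maps a patient id to the chronological list of hospitals
    that patient attended. The weight of the undirected edge (A, B) is the
    number of times a patient's consecutive attendances were at A then B
    (or B then A).
    """
    edges = Counter()
    for hospitals in attendances.values():
        for prev, curr in zip(hospitals, hospitals[1:]):
            if prev != curr:
                # order the pair canonically so the edge is undirected
                edges[tuple(sorted((prev, curr)))] += 1
    return edges

# toy example: two patients, three hospitals
attendances = {
    "p1": ["H1", "H2", "H2", "H3"],
    "p2": ["H2", "H1"],
}
print(patient_sharing_edges(attendances))
# Counter({('H1', 'H2'): 2, ('H2', 'H3'): 1})
```

    The resulting weighted edge list is exactly the input a community-detection algorithm such as Louvain would partition into data-sharing communities.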

    Cocarascu O, Toni F, 2018, Combining Deep Learning and Argumentative Reasoning for the Analysis of Social Media Textual Content Using Small Data Sets, Computational Linguistics, Vol: 44, Pages: 833-858, ISSN: 0891-2017

    Cyras K, Letsios D, Misener R, Toni F, et al., Argumentation for explainable scheduling, Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), Publisher: AAAI

    Mathematical optimization offers highly-effective tools forfinding solutions for problems with well-defined goals, no-tably scheduling. However, optimization solvers are oftenunexplainable black boxes whose solutions are inaccessibleto users and which users cannot interact with. We define anovel paradigm using argumentation to empower the inter-action between optimization solvers and users, supported bytractable explanations which certify or refute solutions. A so-lution can be from a solver or of interest to a user (in thecontext of ’what-if’ scenarios). Specifically, we define argu-mentative and natural language explanations for why a sched-ule is (not) feasible, (not) efficient or (not) satisfying fixeduser decisions, based on models of the fundamental makespanscheduling problem in terms of abstract argumentation frame-works (AFs). We define three types of AFs, whose stableextensions are in one-to-one correspondence with schedulesthat are feasible, efficient and satisfying fixed decisions, re-spectively. We extract the argumentative explanations fromthese AFs and the natural language explanations from the ar-gumentative ones.
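    The correspondence above rests on the standard notion of a stable extension: a set of arguments that is conflict-free and attacks every argument outside it. A minimal brute-force sketch for tiny frameworks (not the paper's scheduling encodings, which are much larger):

```python
from itertools import combinations

def stable_extensions(args, attacks):
    """Enumerate stable extensions of an abstract argumentation framework.

    A set S is stable iff no argument in S attacks another argument in S
    (conflict-free) and every argument outside S is attacked by some
    argument in S. Brute force over all subsets, so only for small AFs.
    """
    attacks = set(attacks)
    extensions = []
    for size in range(len(args) + 1):
        for subset in combinations(sorted(args), size):
            s = set(subset)
            conflict_free = not any((a, b) in attacks for a in s for b in s)
            covers_rest = all(any((a, b) in attacks for a in s)
                              for b in set(args) - s)
            if conflict_free and covers_rest:
                extensions.append(s)
    return extensions

# mutual attack between a and b: two stable extensions, {a} and {b}
print(stable_extensions({"a", "b"}, {("a", "b"), ("b", "a")}))
# [{'a'}, {'b'}]
```

    In the paper's setting, each stable extension of the scheduling AF corresponds one-to-one with a feasible (or efficient, or decision-respecting) schedule, which is what makes the extensions usable as explanations.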

    Russo A, Law M, Broda K, Representing and learning grammars in answer set programming, AAAI-19: Thirty-Third AAAI Conference on Artificial Intelligence, Publisher: Association for the Advancement of Artificial Intelligence

    In this paper we introduce an extension of context-free grammars called answer set grammars (ASGs). These grammars allow annotations on production rules, written in the language of Answer Set Programming (ASP), which can express context-sensitive constraints. We investigate the complexity of various classes of ASG with respect to two decision problems: deciding whether a given string belongs to the language of an ASG and deciding whether the language of an ASG is non-empty. Specifically, we show that the complexity of these decision problems can be lowered by restricting the subset of the ASP language used in the annotations. To aid the applicability of these grammars to computational problems that require context-sensitive parsers for partially known languages, we propose a learning task for inducing the annotations of an ASG. We characterise the complexity of this task and present an algorithm for solving it. An evaluation of a (prototype) implementation is also discussed.
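    ASGs extend context-free grammars, so the first decision problem generalises classical CFG string membership. For context, a minimal sketch of the classical check via CYK for grammars in Chomsky normal form; the toy grammar here is illustrative and none of this is the ASG machinery itself:

```python
def cyk(word, terminal_rules, binary_rules, start="S"):
    """CYK membership test for a CFG in Chomsky normal form.

    `terminal_rules` maps a nonterminal A to the set of terminals it
    derives (A -> 'a'); `binary_rules` maps A to a list of (B, C) pairs
    (A -> B C).
    """
    n = len(word)
    if n == 0:
        return False
    # table[i][length-1]: nonterminals deriving word[i:i+length]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][0] = {A for A, ts in terminal_rules.items() if ch in ts}
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                left = table[i][split - 1]
                right = table[i + split][length - split - 1]
                for A, pairs in binary_rules.items():
                    if any(B in left and C in right for B, C in pairs):
                        table[i][length - 1].add(A)
    return start in table[0][n - 1]

# toy grammar: S -> A B, A -> 'a', B -> 'b'   (language {"ab"})
t_rules = {"A": {"a"}, "B": {"b"}}
b_rules = {"S": [("A", "B")]}
print(cyk("ab", t_rules, b_rules), cyk("aa", t_rules, b_rules))  # True False
```

    The ASP annotations of an ASG add context-sensitive constraints on top of such derivations, which is what drives the complexity results in the paper.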

    Russo A, Law M, Broda K, AAAI 2019, Proceedings of the 33rd AAAI Conference on Artificial Intelligence, AAAI-19: Thirty-Third AAAI Conference on Artificial Intelligence

    Cyras K, Delaney B, Prociuk D, Toni F, Chapman M, Dominguez J, Curcin V, et al., 2018, Argumentation for explainable reasoning with conflicting medical recommendations, Reasoning with Ambiguous and Conflicting Evidence and Recommendations in Medicine (MedRACER 2018), Pages: 14-22

    Designing a treatment path for a patient suffering from multiple conditions involves merging and applying multiple clinical guidelines and is recognised as a difficult task. This is especially relevant in the treatment of patients with multiple chronic diseases, such as chronic obstructive pulmonary disease, because of the high risk of any treatment change having potentially lethal exacerbations. Clinical guidelines are typically designed to assist a clinician in treating a single condition with no general method for integrating them. Additionally, guidelines for different conditions may contain mutually conflicting recommendations with certain actions potentially leading to adverse effects. Finally, individual patient preferences need to be respected when making decisions. In this work we present a description of an integrated framework and a system to execute conflicting clinical guideline recommendations by taking into account patient specific information and preferences of various parties. Overall, our framework combines a patient's electronic health record data with clinical guideline representation to obtain personalised recommendations, uses computational argumentation techniques to resolve conflicts among recommendations while respecting preferences of various parties involved, if any, and yields conflict-free recommendations that are inspectable and explainable. The system implementing our framework will allow for continuous learning by taking feedback from the decision makers and integrating it within its pipeline.

    Saputra RP, Kormushev P, 2018, Casualty Detection from 3D Point Cloud Data for Autonomous Ground Mobile Rescue Robots

    © 2018 IEEE. One of the most important features of mobile rescue robots is the ability to autonomously detect casualties, i.e. human bodies, which are usually lying on the ground. This paper proposes a novel method for autonomously detecting casualties lying on the ground using 3D point-cloud data obtained from an on-board sensor, such as an RGB-D camera or a 3D LIDAR, on a mobile rescue robot. In this method, the obtained 3D point-cloud data is projected onto the detected ground plane, i.e. floor, within the point cloud. Then, this projected point cloud is converted into a grid map that is used afterwards as an input for the algorithm to detect human body shapes. The proposed method is evaluated by detecting a human dummy placed in different random positions and orientations, using an on-board RGB-D camera on a mobile rescue robot called ResQbot. To evaluate the robustness of the casualty detection method to different camera angles, the orientation of the camera is set to different angles. The experimental results show that, using the point-cloud data from the on-board RGB-D camera, the proposed method successfully detects the casualty in all tested body positions and orientations relative to the on-board camera, as well as in all tested camera angles.
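    The projection-and-rasterisation step can be sketched as follows. The grid resolution, the field dimensions, and the assumption that the ground plane has already been detected and aligned with z = 0 are illustrative simplifications, not details from the paper:

```python
def pointcloud_to_gridmap(points, cell=0.1, width=1.0, height=1.0):
    """Project 3D points onto the ground plane and rasterise an occupancy grid.

    Assumes the ground plane is z = 0 and already detected, so a point's
    (x, y) coordinates index the grid directly; the height coordinate is
    simply dropped.
    """
    nx, ny = int(width / cell), int(height / cell)
    grid = [[0] * nx for _ in range(ny)]
    for x, y, _z in points:               # drop the height coordinate
        i, j = int(y / cell), int(x / cell)
        if 0 <= i < ny and 0 <= j < nx:   # ignore points outside the map
            grid[i][j] = 1
    return grid

# two points fall inside the 1 m x 1 m map; the third lies outside it
cloud = [(0.05, 0.05, 0.3), (0.35, 0.15, 0.2), (2.0, 2.0, 0.1)]
grid = pointcloud_to_gridmap(cloud)
print(sum(map(sum, grid)))  # 2 occupied cells
```

    The resulting binary grid map is the kind of 2D input on which a shape detector for human bodies could then operate.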

    Dutordoir V, Salimbeni H, Deisenroth M, Hensman J, et al., 2018, Gaussian Process Conditional Density Estimation

    Conditional Density Estimation (CDE) models deal with estimating conditional distributions. The conditions imposed on the distribution are the inputs of the model. CDE is a challenging task as there is a fundamental trade-off between model complexity, representational capacity and overfitting. In this work, we propose to extend the model's input with latent variables and use Gaussian processes (GP) to map this augmented input onto samples from the conditional distribution. Our Bayesian approach allows for the modeling of small datasets, but we also provide the machinery for it to be applied to big data using stochastic variational inference. Our approach can be used to model densities even in sparse data regions, and allows for sharing learned structure between conditions. We illustrate the effectiveness and wide-reaching applicability of our model on a variety of real-world problems, such as spatio-temporal density estimation of taxi drop-offs, non-Gaussian noise modeling, and few-shot learning on Omniglot images.

    Wilson J, Hutter F, Deisenroth MP, Maximizing acquisition functions for Bayesian optimization, Advances in Neural Information Processing Systems (NIPS) 2018, Publisher: Massachusetts Institute of Technology Press, ISSN: 1049-5258

    Bayesian optimization is a sample-efficient approach to global optimization that relies on theoretically motivated value heuristics (acquisition functions) to guide its search process. Fully maximizing acquisition functions produces the Bayes' decision rule, but this ideal is difficult to achieve since these functions are frequently non-trivial to optimize. This statement is especially true when evaluating queries in parallel, where acquisition functions are routinely non-convex, high-dimensional, and intractable. We first show that acquisition functions estimated via Monte Carlo integration are consistently amenable to gradient-based optimization. Subsequently, we identify a common family of acquisition functions, including EI and UCB, whose characteristics not only facilitate but justify use of greedy approaches for their maximization.
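    As a toy illustration of the Monte Carlo view of an acquisition function, here is a stdlib-only sketch of expected improvement (EI) at a single point with a Gaussian posterior, checked against its closed form. The gradient-based maximization that the paper studies is omitted; all parameter values are invented:

```python
import math
import random

def ei_analytic(mu, sigma, f_best):
    """Closed-form expected improvement for minimisation, f ~ N(mu, sigma^2)."""
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return sigma * (z * cdf + pdf)

def ei_monte_carlo(mu, sigma, f_best, n=200_000, seed=0):
    """Monte Carlo estimate: EI = E[max(0, f_best - f)] with f ~ N(mu, sigma^2)."""
    rng = random.Random(seed)
    total = sum(max(0.0, f_best - rng.gauss(mu, sigma)) for _ in range(n))
    return total / n

mc = ei_monte_carlo(mu=0.0, sigma=1.0, f_best=0.5)
exact = ei_analytic(mu=0.0, sigma=1.0, f_best=0.5)
print(mc, exact)  # the two values agree to within about 1e-2
```

    In the parallel-query setting the paper addresses, no closed form is available, which is why differentiable Monte Carlo estimates like the one above become the object being maximized.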

    Schulz C, Toni F, 2018, On the responsibility for undecisiveness in preferred and stable labellings in abstract argumentation, Artificial Intelligence, Vol: 262, Pages: 301-335, ISSN: 0004-3702

    Wang K, Shah A, Kormushev P, 2018, SLIDER: A Bipedal Robot with Knee-less Legs and Vertical Hip Sliding Motion

    Sæmundsson S, Hofmann K, Deisenroth MP, 2018, Meta reinforcement learning with latent variable Gaussian processes, Uncertainty in Artificial Intelligence (UAI) 2018, Publisher: Association for Uncertainty in Artificial Intelligence (AUAI)

    Learning from small data sets is critical in many practical applications where data collection is time consuming or expensive, e.g., robotics, animal experiments or drug design. Meta learning is one way to increase the data efficiency of learning algorithms by generalizing learned concepts from a set of training tasks to unseen, but related, tasks. Often, this relationship between tasks is hard coded or relies in some other way on human expertise. In this paper, we frame meta learning as a hierarchical latent variable model and infer the relationship between tasks automatically from data. We apply our framework in a model-based reinforcement learning setting and show that our meta-learning model effectively generalizes to novel tasks by identifying how new tasks relate to prior ones from minimal data. This results in up to a 60% reduction in the average interaction time needed to solve tasks compared to strong baselines.

    Cocarascu O, Cyras K, Toni F, 2018, Explanatory predictions with artificial neural networks and argumentation, Workshop on Explainable Artificial Intelligence (XAI)

    Data-centric AI has proven successful in several domains, but its outputs are often hard to explain. We present an architecture combining Artificial Neural Networks (ANNs) for feature selection and an instance of Abstract Argumentation (AA) for reasoning to provide effective predictions, explainable both dialectically and logically. In particular, we train an autoencoder to rank features in input examples, and select highest-ranked features to generate an AA framework that can be used for making and explaining predictions as well as mapped onto logical rules, which can equivalently be used for making predictions and for explaining. We show empirically that our method significantly outperforms ANNs and a decision-tree-based method from which logical rules can also be extracted.

    Pardo F, Tavakoli A, Levdik V, Kormushev P, et al., 2018, Time limits in reinforcement learning, International Conference on Machine Learning, Pages: 4042-4051

    In reinforcement learning, it is common to let an agent interact for a fixed amount of time with its environment before resetting it and repeating the process in a series of episodes. The task that the agent has to learn can either be to maximize its performance over (i) that fixed period, or (ii) an indefinite period where time limits are only used during training to diversify experience. In this paper, we provide a formal account for how time limits could effectively be handled in each of the two cases and explain why not doing so can cause state-aliasing and invalidation of experience replay, leading to suboptimal policies and training instability. In case (i), we argue that the terminations due to time limits are in fact part of the environment, and thus a notion of the remaining time should be included as part of the agent's input to avoid violation of the Markov property. In case (ii), the time limits are not part of the environment and are only used to facilitate learning. We argue that this insight should be incorporated by bootstrapping from the value of the state at the end of each partial episode. For both cases, we illustrate empirically the significance of our considerations in improving the performance and stability of existing reinforcement learning algorithms, showing state-of-the-art results on several control tasks.
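    The case (ii) prescription, bootstrapping from the final state's value when an episode ends only because of the time limit, amounts to a one-line change in the TD target. A minimal sketch (function and argument names are illustrative, not from the paper):

```python
def td_target(reward, next_value, gamma, done, time_limit_reached):
    """One-step TD target distinguishing true terminations from time limits.

    A genuine environment termination gets no bootstrap; an episode cut
    short only by the training-time limit bootstraps from the value of the
    state it ended in, so the cut does not masquerade as a terminal state.
    """
    if done and not time_limit_reached:
        return reward                       # true terminal state
    return reward + gamma * next_value      # keep bootstrapping

# timeout: bootstrap from V(s'); true termination: do not
print(td_target(1.0, 10.0, 0.99, done=True, time_limit_reached=True))   # 10.9
print(td_target(1.0, 10.0, 0.99, done=True, time_limit_reached=False))  # 1.0
```

    Conflating the two cases (always returning just `reward` when `done` is set) is exactly the experience-replay invalidation the abstract warns about.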

    Altuncu MT, Mayer E, Yaliraki SN, Barahona M, et al., 2018, From Text to Topics in Healthcare Records: An Unsupervised Graph Partitioning Methodology

    Electronic Healthcare Records contain large volumes of unstructured data, including extensive free text. Yet this source of detailed information often remains under-used because of a lack of methodologies to extract interpretable content in a timely manner. Here we apply network-theoretical tools to analyse free text in Hospital Patient Incident reports from the National Health Service, to find clusters of documents with similar content in an unsupervised manner at different levels of resolution. We combine deep neural network paragraph vector text-embedding with multiscale Markov Stability community detection applied to a sparsified similarity graph of document vectors, and showcase the approach on incident reports from Imperial College Healthcare NHS Trust, London. The multiscale community structure reveals different levels of meaning in the topics of the dataset, as shown by descriptive terms extracted from the clusters of records. We also compare a posteriori against hand-coded categories assigned by healthcare personnel, and show that our approach outperforms LDA-based models. Our content clusters exhibit good correspondence with two levels of hand-coded categories, yet they also provide further medical detail in certain areas and reveal complementary descriptors of incidents beyond the external classification taxonomy.

    Muggleton S, Dai WZ, Sammut C, Tamaddoni-Nezhad A, Wen J, Zhou ZH, et al., 2018, Meta-Interpretive Learning from noisy images, Machine Learning, Vol: 107, Pages: 1097-1118, ISSN: 0885-6125

    Statistical machine learning is widely used in image classification. However, most techniques (1) require many images to achieve high accuracy and (2) do not provide support for reasoning below the level of classification, and so are unable to support secondary reasoning, such as the existence and position of light sources and other objects outside the image. This paper describes an Inductive Logic Programming approach called Logical Vision (LV) which overcomes some of these limitations. LV uses Meta-Interpretive Learning (MIL) combined with low-level extraction of high-contrast points sampled from the image to learn recursive logic programs describing the image. In published work LV was demonstrated capable of high-accuracy prediction of classes such as regular polygon from small numbers of images where Support Vector Machines and Convolutional Neural Networks gave near random predictions in some cases. LV has so far only been applied to noise-free, artificially generated images. This paper extends LV by (a) addressing classification noise using a new noise-tolerant version of the MIL system Metagol, (b) addressing attribute noise using primitive-level statistical estimators to identify sub-objects in real images, (c) using a wider class of background models representing classical 2D shapes such as circles and ellipses, (d) providing richer learnable background knowledge in the form of a simple but generic recursive theory of light reflection. In our experiments we consider noisy images in both natural science settings and in a RoboCup competition setting. The natural science settings involve identification of the position of the light source in telescopic and microscopic images, while the RoboCup setting involves identification of the position of the ball. Our results indicate that with real images the new noise-robust version of LV using a single example (i.e. one-shot LV) converges to an accuracy at least comparable to a thirty-shot statistical machine learner on both

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
