Search results

  • Conference paper
    Dutordoir V, Salimbeni HR, Hensman J, Deisenroth MP et al., 2018,

    Gaussian process conditional density estimation

    , Advances in Neural Information Processing Systems, Publisher: Neural Information Processing Systems Conference

    Conditional Density Estimation (CDE) models deal with estimating conditional distributions. The conditions imposed on the distribution are the inputs of the model. CDE is a challenging task as there is a fundamental trade-off between model complexity, representational capacity and overfitting. In this work, we propose to extend the model's input with latent variables and use Gaussian processes (GP) to map this augmented input onto samples from the conditional distribution. Our Bayesian approach allows for the modeling of small datasets, but we also provide the machinery for it to be applied to big data using stochastic variational inference. Our approach can be used to model densities even in sparse data regions, and allows for sharing learned structure between conditions. We illustrate the effectiveness and wide-reaching applicability of our model on a variety of real-world problems, such as spatio-temporal density estimation of taxi drop-offs, non-Gaussian noise modeling, and few-shot learning on omniglot images.

  • Conference paper
    Wang K, Shah A, Kormushev P, 2018,

    SLIDER: A Bipedal Robot with Knee-less Legs and Vertical Hip Sliding Motion

    , 21st International Conference on Climbing and Walking Robots and Support Technologies for Mobile Machines (CLAWAR 2018)
  • Conference paper
    Čyras K, Letsios D, Misener R, Toni F et al.,

    Argumentation for explainable scheduling

    , Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), Publisher: AAAI

    Mathematical optimization offers highly-effective tools for finding solutions for problems with well-defined goals, notably scheduling. However, optimization solvers are often unexplainable black boxes whose solutions are inaccessible to users and which users cannot interact with. We define a novel paradigm using argumentation to empower the interaction between optimization solvers and users, supported by tractable explanations which certify or refute solutions. A solution can be from a solver or of interest to a user (in the context of 'what-if' scenarios). Specifically, we define argumentative and natural language explanations for why a schedule is (not) feasible, (not) efficient or (not) satisfying fixed user decisions, based on models of the fundamental makespan scheduling problem in terms of abstract argumentation frameworks (AFs). We define three types of AFs, whose stable extensions are in one-to-one correspondence with schedules that are feasible, efficient and satisfying fixed decisions, respectively. We extract the argumentative explanations from these AFs and the natural language explanations from the argumentative ones.

  • Conference paper
    Russo A, Law M, Broda K,

    Representing and learning grammars in answer set programming

    , AAAI-19: Thirty-third AAAI Conference on Artificial Intelligence, Publisher: Association for the Advancement of Artificial Intelligence

    In this paper we introduce an extension of context-free grammars called answer set grammars (ASGs). These grammars allow annotations on production rules, written in the language of Answer Set Programming (ASP), which can express context-sensitive constraints. We investigate the complexity of various classes of ASG with respect to two decision problems: deciding whether a given string belongs to the language of an ASG and deciding whether the language of an ASG is non-empty. Specifically, we show that the complexity of these decision problems can be lowered by restricting the subset of the ASP language used in the annotations. To aid the applicability of these grammars to computational problems that require context-sensitive parsers for partially known languages, we propose a learning task for inducing the annotations of an ASG. We characterise the complexity of this task and present an algorithm for solving it. An evaluation of a (prototype) implementation is also discussed.

  • Conference paper
    Russo A, Law M, Broda K,

    AAAI 2019, Proceedings of the 33rd AAAI Conference on Artificial Intelligence

    , AAAI-19: Thirty-Third AAAI Conference on Artificial Intelligence
  • Conference paper
    Cyras K, Delaney B, Prociuk D, Toni F, Chapman M, Dominguez J, Curcin V et al., 2018,

    Argumentation for explainable reasoning with conflicting medical recommendations

    , Reasoning with Ambiguous and Conflicting Evidence and Recommendations in Medicine (MedRACER 2018), Pages: 14-22

    Designing a treatment path for a patient suffering from multiple conditions involves merging and applying multiple clinical guidelines and is recognised as a difficult task. This is especially relevant in the treatment of patients with multiple chronic diseases, such as chronic obstructive pulmonary disease, because of the high risk of any treatment change having potentially lethal exacerbations. Clinical guidelines are typically designed to assist a clinician in treating a single condition with no general method for integrating them. Additionally, guidelines for different conditions may contain mutually conflicting recommendations with certain actions potentially leading to adverse effects. Finally, individual patient preferences need to be respected when making decisions. In this work we present a description of an integrated framework and a system to execute conflicting clinical guideline recommendations by taking into account patient specific information and preferences of various parties. Overall, our framework combines a patient's electronic health record data with clinical guideline representation to obtain personalised recommendations, uses computational argumentation techniques to resolve conflicts among recommendations while respecting preferences of various parties involved, if any, and yields conflict-free recommendations that are inspectable and explainable. The system implementing our framework will allow for continuous learning by taking feedback from the decision makers and integrating it within its pipeline.

  • Conference paper
    Wilson J, Hutter F, Deisenroth MP,

    Maximizing acquisition functions for Bayesian optimization

    , Advances in Neural Information Processing Systems (NIPS) 2018, Publisher: Massachusetts Institute of Technology Press, ISSN: 1049-5258

    Bayesian optimization is a sample-efficient approach to global optimization that relies on theoretically motivated value heuristics (acquisition functions) to guide its search process. Fully maximizing acquisition functions produces the Bayes' decision rule, but this ideal is difficult to achieve since these functions are frequently non-trivial to optimize. This statement is especially true when evaluating queries in parallel, where acquisition functions are routinely non-convex, high-dimensional, and intractable. We first show that acquisition functions estimated via Monte Carlo integration are consistently amenable to gradient-based optimization. Subsequently, we identify a common family of acquisition functions, including EI and UCB, whose characteristics not only facilitate but justify use of greedy approaches for their maximization.
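
    The Monte Carlo treatment described in the abstract above can be illustrated in a few lines. The sketch below is an assumption-laden toy, not the authors' implementation: it uses a scikit-learn GP, a two-point batch, fixed base normal draws, and numerical gradients inside L-BFGS-B (the paper differentiates the estimator directly), to show how fixing the base samples makes the Monte Carlo estimate of parallel EI a smooth function of the candidate batch that an off-the-shelf optimiser can climb.

    import numpy as np
    from scipy.optimize import minimize
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(1)

    # Toy objective and a handful of observations to condition the GP on.
    f = lambda x: np.sin(5 * x) + 0.5 * x
    X_obs = rng.uniform(0, 1, size=(6, 1))
    y_obs = f(X_obs).ravel()
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6).fit(X_obs, y_obs)

    best_y = y_obs.max()
    Z = rng.normal(size=(256, 2))  # fixed base samples; batch size q = 2

    def neg_qei(flat_batch):
        # Monte Carlo estimate of parallel EI via reparameterised joint samples.
        X = flat_batch.reshape(2, 1)
        mu, cov = gp.predict(X, return_cov=True)
        L = np.linalg.cholesky(cov + 1e-9 * np.eye(2))
        samples = mu + Z @ L.T
        improvement = np.maximum(samples - best_y, 0.0).max(axis=1)
        return -improvement.mean()

    # Multi-start gradient-based maximisation of the Monte Carlo estimate.
    best = min(
        (minimize(neg_qei, rng.uniform(0, 1, size=2), bounds=[(0, 1)] * 2)
         for _ in range(5)),
        key=lambda result: result.fun,
    )
    print("proposed batch:", best.x, "estimated q-EI:", -best.fun)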

  • Conference paper
    Salimbeni HR, Cheng C-A, Boots B, Deisenroth MP et al.,

    Orthogonally decoupled variational Gaussian processes

    , Advances in Neural Information Processing Systems (NIPS) 2018, Publisher: Massachusetts Institute of Technology Press, ISSN: 1049-5258

    Gaussian processes (GPs) provide a powerful non-parametric framework for reasoning over functions. Despite appealing theory, its superlinear computational and memory complexities have presented a long-standing challenge. State-of-the-art sparse variational inference methods trade modeling accuracy against complexity. However, the complexities of these methods still scale superlinearly in the number of basis functions, implying that sparse GP methods are able to learn from large datasets only when a small model is used. Recently, a decoupled approach was proposed that removes the unnecessary coupling between the complexities of modeling the mean and the covariance functions of a GP. It achieves a linear complexity in the number of mean parameters, so an expressive posterior mean function can be modeled. While promising, this approach suffers from optimization difficulties due to ill-conditioning and non-convexity. In this work, we propose an alternative decoupled parametrization. It adopts an orthogonal basis in the mean function to model the residues that cannot be learned by the standard coupled approach. Therefore, our method extends, rather than replaces, the coupled approach to achieve strictly better performance. This construction admits a straightforward natural gradient update rule, so the structure of the information manifold that is lost during decoupling can be leveraged to speed up learning. Empirically, our algorithm demonstrates significantly faster convergence in multiple experiments.

  • Journal article
    Schulz C, Toni F, 2018,

    On the responsibility for undecisiveness in preferred and stable labellings in abstract argumentation

    , Artificial Intelligence, Vol: 262, Pages: 301-335, ISSN: 1872-7921

    Different semantics of abstract Argumentation Frameworks (AFs) provide different levels of decisiveness for reasoning about the acceptability of conflicting arguments. The stable semantics is useful for applications requiring a high level of decisiveness, as it assigns to each argument the label “accepted” or the label “rejected”. Unfortunately, stable labellings are not guaranteed to exist, thus raising the question as to which parts of AFs are responsible for the non-existence. In this paper, we address this question by investigating a more general question concerning preferred labellings (which may be less decisive than stable labellings but are always guaranteed to exist), namely why a given preferred labelling may not be stable and thus undecided on some arguments. In particular, (1) we give various characterisations of parts of an AF, based on the given preferred labelling, and (2) we show that these parts are indeed responsible for the undecisiveness if the preferred labelling is not stable. We then use these characterisations to explain the non-existence of stable labellings. We present two types of characterisations, based on labellings that are more (or equally) committed than the given preferred labelling on the one hand, and based on the structure of the given AF on the other, and compare the respective AF parts deemed responsible. To prove that our characterisations indeed yield responsible parts, we use a notion of enforcement of labels through structural revision, by means of which the preferred labelling of the given AF can be turned into a stable labelling of the structurally revised AF. Rather than prescribing how this structural revision is carried out, we focus on the enforcement of labels and leave the engineering of the revision open to fulfil differing requirements of applications and information available to users.
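
    As a generic illustration of the stable semantics discussed above (not the paper's characterisations or code), the brute-force sketch below enumerates stable extensions of a small abstract argumentation framework and shows an odd attack cycle for which no stable labelling exists; the argument names and frameworks are invented for the example.

    from itertools import combinations

    def stable_extensions(args, attacks):
        """All stable extensions: conflict-free sets that attack every outside argument."""
        extensions = []
        for r in range(len(args) + 1):
            for subset in combinations(args, r):
                S = set(subset)
                conflict_free = not any((a, b) in attacks for a in S for b in S)
                attacks_rest = all(any((a, b) in attacks for a in S) for b in set(args) - S)
                if conflict_free and attacks_rest:
                    extensions.append(S)
        return extensions

    three_cycle = ({"a", "b", "c"}, {("a", "b"), ("b", "c"), ("c", "a")})
    print(stable_extensions(*three_cycle))  # [] -- the odd attack cycle admits no stable labelling

    chain = ({"a", "b", "c"}, {("a", "b"), ("b", "c")})
    print(stable_extensions(*chain))        # [{'a', 'c'}] -- a accepted, b rejected, c accepted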

  • Conference paper
    Sæmundsson S, Hofmann K, Deisenroth MP, 2018,

    Meta reinforcement learning with latent variable Gaussian processes

    , Uncertainty in Artificial Intelligence (UAI) 2018, Publisher: Association for Uncertainty in Artificial Intelligence (AUAI)

    Learning from small data sets is critical in many practical applications where data collection is time consuming or expensive, e.g., robotics, animal experiments or drug design. Meta learning is one way to increase the data efficiency of learning algorithms by generalizing learned concepts from a set of training tasks to unseen, but related, tasks. Often, this relationship between tasks is hard coded or relies in some other way on human expertise. In this paper, we frame meta learning as a hierarchical latent variable model and infer the relationship between tasks automatically from data. We apply our framework in a model-based reinforcement learning setting and show that our meta-learning model effectively generalizes to novel tasks by identifying how new tasks relate to prior ones from minimal data. This results in up to a 60% reduction in the average interaction time needed to solve tasks compared to strong baselines.

  • Conference paper
    Saputra RP, Kormushev P, 2018,

    Casualty detection for mobile rescue robots via ground-projected point clouds

    , Towards Autonomous Robotic Systems (TAROS) 2018, Publisher: Springer, Cham, Pages: 473-475, ISSN: 0302-9743

    In order to operate autonomously, mobile rescue robots need to be able to detect human casualties in disaster situations. In this paper, we propose a novel method for autonomous detection of casualties lying down on the ground based on point-cloud data. This data can be obtained from different sensors, such as an RGB-D camera or a 3D LIDAR sensor. The method is based on a ground-projected point-cloud (GPPC) image to achieve human body shape detection. A preliminary experiment has been conducted using the RANSAC method for floor detection, and the HOG feature and the SVM classifier to detect human body shape. The results show that the proposed method succeeds in identifying a casualty from point-cloud data in a wide range of viewing angles.
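
    A rough sketch of the pipeline described in the abstract above, under explicit assumptions: synthetic points stand in for sensor data, scikit-learn's RANSACRegressor replaces whatever RANSAC variant the authors used, and the SVM classification step is only indicated in a comment rather than trained.

    import numpy as np
    from sklearn.linear_model import RANSACRegressor
    from skimage.feature import hog

    rng = np.random.default_rng(2)

    # Synthetic cloud: a flat floor plus a rough "body-like" blob lying on it.
    floor = np.column_stack([rng.uniform(0, 4, (3000, 2)), 0.02 * rng.normal(size=3000)])
    body = np.column_stack([rng.normal([2.0, 2.0], [0.8, 0.25], (500, 2)),
                            rng.uniform(0.05, 0.35, 500)])
    cloud = np.vstack([floor, body])

    # 1) Floor detection: fit z = f(x, y) with RANSAC and drop the floor inliers.
    ransac = RANSACRegressor(residual_threshold=0.05).fit(cloud[:, :2], cloud[:, 2])
    above_floor = cloud[~ransac.inlier_mask_]

    # 2) Ground-projected point-cloud image: maximum height per (x, y) grid cell.
    grid = np.zeros((64, 64))
    ix = np.clip((above_floor[:, 0] / 4 * 64).astype(int), 0, 63)
    iy = np.clip((above_floor[:, 1] / 4 * 64).astype(int), 0, 63)
    np.maximum.at(grid, (iy, ix), above_floor[:, 2])

    # 3) HOG descriptor of the projected image; a trained classifier (e.g.
    #    sklearn.svm.LinearSVC) would then label it "casualty" / "not casualty".
    descriptor = hog(grid, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    print("HOG descriptor length:", descriptor.shape[0])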

  • Conference paper
    Alrajeh D, Russo A, 2018,

    Logic-based learning: theory and application

    , International Dagstuhl Seminar 16172, Publisher: Springer, Pages: 219-256, ISSN: 0302-9743

    In recent years, research efforts have been directed towards the use of Machine Learning (ML) techniques to support and automate activities such as specification mining, risk assessment, program analysis, and program repair. The focus has largely been on the use of machine learning black box methods whose inference mechanisms are not easily interpretable and whose outputs are not declarative and guaranteed to be correct. Hence, they cannot readily be used to inform the elaboration and revision of declarative software models identified to be incorrect or incomplete. On the other hand, recent advances in ML have witnessed the emergence of new logic-based machine learning approaches that overcome such limitations and which have been proven to be well-suited for many software engineering tasks. In this chapter, we present a survey of the state-of-the-art of logic-based machine learning techniques, highlight their expressivity, define their different underlying semantics, and discuss their efficiency and the heuristics they adopt to guide the search for solutions. We then demonstrate the application of this type of machine learning to (declarative) specification refinement and revision as a complementary task to program analysis.

  • Conference paper
    Cocarascu O, Cyras K, Toni F, 2018,

    Explanatory predictions with artificial neural networks and argumentation

    , Workshop on Explainable Artificial Intelligence (XAI)

    Data-centric AI has proven successful in several domains, but its outputs are often hard to explain. We present an architecture combining Artificial Neural Networks (ANNs) for feature selection and an instance of Abstract Argumentation (AA) for reasoning to provide effective predictions, explainable both dialectically and logically. In particular, we train an autoencoder to rank features in input examples, and select highest-ranked features to generate an AA framework that can be used for making and explaining predictions as well as mapped onto logical rules, which can equivalently be used for making predictions and for explaining. We show empirically that our method significantly outperforms ANNs and a decision-tree-based method from which logical rules can also be extracted.

  • Conference paper
    Rago A, Cocarascu O, Toni F, 2018,

    Argumentation-based recommendations: fantastic explanations and how to find them

    , The Twenty-Seventh International Joint Conference on Artificial Intelligence, (IJCAI 2018), Pages: 1949-1955

    A significant problem of recommender systems is their inability to explain recommendations, resulting in turn in ineffective feedback from users and the inability to adapt to users’ preferences. We propose a hybrid method for calculating predicted ratings, built upon an item/aspect-based graph with users’ partially given ratings, that can be naturally used to provide explanations for recommendations, extracted from user-tailored Tripolar Argumentation Frameworks (TFs). We show that our method can be understood as a gradual semantics for TFs, exhibiting a desirable, albeit weak, property of balance. We also show experimentally that our method is competitive in generating correct predictions, compared with state-of-the-art methods, and illustrate how users can interact with the generated explanations to improve quality of recommendations.

  • Conference paper
    Olofsson S, Deisenroth M, Misener R, 2018,

    Design of experiments for model discrimination hybridising analytical and data-driven approaches

    , 35th International Conference on Machine Learning (ICML), Publisher: ICML

    Healthcare companies must submit pharmaceutical drugs or medical devices to regulatory bodies before marketing new technology. Regulatory bodies frequently require transparent and interpretable computational modelling to justify a new healthcare technology, but researchers may have several competing models for a biological system and too little data to discriminate between the models. In design of experiments for model discrimination, the goal is to design maximally informative physical experiments in order to discriminate between rival predictive models. Prior work has focused either on analytical approaches, which cannot manage all functions, or on data-driven approaches, which may have computational difficulties or lack interpretable marginal predictive distributions. We develop a methodology introducing Gaussian process surrogates in lieu of the original mechanistic models. We thereby extend existing design and model discrimination methods developed for analytical models to cases of non-analytical models in a computationally efficient manner.

  • Conference paper
    Pardo F, Tavakoli A, Levdik V, Kormushev P et al., 2018,

    Time limits in reinforcement learning

    , International Conference on Machine Learning, Pages: 4042-4051

    In reinforcement learning, it is common to let an agent interact for a fixed amount of time with its environment before resetting it and repeating the process in a series of episodes. The task that the agent has to learn can either be to maximize its performance over (i) that fixed period, or (ii) an indefinite period where time limits are only used during training to diversify experience. In this paper, we provide a formal account for how time limits could effectively be handled in each of the two cases and explain why not doing so can cause state-aliasing and invalidation of experience replay, leading to suboptimal policies and training instability. In case (i), we argue that the terminations due to time limits are in fact part of the environment, and thus a notion of the remaining time should be included as part of the agent's input to avoid violation of the Markov property. In case (ii), the time limits are not part of the environment and are only used to facilitate learning. We argue that this insight should be incorporated by bootstrapping from the value of the state at the end of each partial episode. For both cases, we illustrate empirically the significance of our considerations in improving the performance and stability of existing reinforcement learning algorithms, showing state-of-the-art results on several control tasks.
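
    The two recommendations in the abstract above translate directly into a few lines. This is a minimal sketch with invented function names, not the paper's code: case (i) appends the normalised remaining time to the observation so the state stays Markov, and case (ii) keeps bootstrapping through terminations that are caused only by the time limit.

    import numpy as np

    def time_aware_observation(obs, t, time_limit):
        """Case (i): append normalised remaining time to the observation."""
        return np.append(obs, (time_limit - t) / time_limit)

    def td_target(reward, next_value, terminal, timeout, gamma=0.99):
        """Case (ii): a timeout is not a true terminal state, so still bootstrap."""
        bootstrap = 0.0 if (terminal and not timeout) else next_value
        return reward + gamma * bootstrap

    print(time_aware_observation(np.array([0.3, -1.2]), t=150, time_limit=200))  # [0.3, -1.2, 0.25]
    # An episode cut off by the time limit versus a genuine failure state:
    print(td_target(reward=1.0, next_value=5.0, terminal=True, timeout=True))    # 5.95
    print(td_target(reward=1.0, next_value=5.0, terminal=True, timeout=False))   # 1.0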

  • Conference paper
    Altuncu MT, Mayer E, Yaliraki SN, Barahona M et al., 2018,

    From Text to Topics in Healthcare Records: An Unsupervised Graph Partitioning Methodology

    , 2018 KDD Conference Proceedings - MLMH: Machine Learning for Medicine and Healthcare

    Electronic Healthcare Records contain large volumes of unstructured data, including extensive free text. Yet this source of detailed information often remains under-used because of a lack of methodologies to extract interpretable content in a timely manner. Here we apply network-theoretical tools to analyse free text in Hospital Patient Incident reports from the National Health Service, to find clusters of documents with similar content in an unsupervised manner at different levels of resolution. We combine deep neural network paragraph vector text-embedding with multiscale Markov Stability community detection applied to a sparsified similarity graph of document vectors, and showcase the approach on incident reports from Imperial College Healthcare NHS Trust, London. The multiscale community structure reveals different levels of meaning in the topics of the dataset, as shown by descriptive terms extracted from the clusters of records. We also compare a posteriori against hand-coded categories assigned by healthcare personnel, and show that our approach outperforms LDA-based models. Our content clusters exhibit good correspondence with two levels of hand-coded categories, yet they also provide further medical detail in certain areas and reveal complementary descriptors of incidents beyond the external classification taxonomy.
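
    A simplified sketch of the document-clustering pipeline described in the abstract above. The embedding uses gensim's Doc2Vec and the sparsified graph is built with scikit-learn's k-nearest-neighbour graph; since multiscale Markov Stability is not available in standard libraries, the sketch substitutes networkx's greedy modularity communities, and the six toy incident reports are invented.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities
    from gensim.models.doc2vec import Doc2Vec, TaggedDocument
    from sklearn.neighbors import kneighbors_graph

    docs = [
        "patient fall in ward during night shift",
        "medication dose error reported by nurse",
        "slip and fall near bathroom entrance",
        "wrong drug administered at pharmacy",
        "delayed transfer from emergency department",
        "ambulance handover delay at emergency unit",
    ]

    # 1) Paragraph-vector embedding of the free-text records.
    tagged = [TaggedDocument(d.split(), [i]) for i, d in enumerate(docs)]
    model = Doc2Vec(tagged, vector_size=32, min_count=1, epochs=50)
    vectors = [model.dv[i] for i in range(len(docs))]

    # 2) Sparsified similarity graph: keep each document's nearest neighbours.
    adjacency = kneighbors_graph(vectors, n_neighbors=2, metric="cosine", mode="connectivity")
    rows, cols = adjacency.nonzero()
    graph = nx.Graph(list(zip(rows.tolist(), cols.tolist())))

    # 3) Community detection on the graph (stand-in for multiscale Markov Stability).
    for k, cluster in enumerate(greedy_modularity_communities(graph)):
        print(f"cluster {k}:", [docs[i] for i in sorted(cluster)])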

  • Journal article
    Muggleton S, Dai WZ, Sammut C, Tamaddoni-Nezhad A, Wen J, Zhou ZH et al., 2018,

    Meta-Interpretive Learning from noisy images

    , Machine Learning, Vol: 107, Pages: 1097-1118, ISSN: 0885-6125

    Statistical machine learning is widely used in image classification. However, most techniques (1) require many images to achieve high accuracy and (2) do not provide support for reasoning below the level of classification, and so are unable to support secondary reasoning, such as the existence and position of light sources and other objects outside the image. This paper describes an Inductive Logic Programming approach called Logical Vision (LV) which overcomes some of these limitations. LV uses Meta-Interpretive Learning (MIL) combined with low-level extraction of high-contrast points sampled from the image to learn recursive logic programs describing the image. In published work LV was demonstrated capable of high-accuracy prediction of classes such as regular polygon from small numbers of images where Support Vector Machines and Convolutional Neural Networks gave near random predictions in some cases. LV has so far only been applied to noise-free, artificially generated images. This paper extends LV by (a) addressing classification noise using a new noise-tolerant version of the MIL system Metagol, (b) addressing attribute noise using primitive-level statistical estimators to identify sub-objects in real images, (c) using a wider class of background models representing classical 2D shapes such as circles and ellipses, (d) providing richer learnable background knowledge in the form of a simple but generic recursive theory of light reflection. In our experiments we consider noisy images in both natural science settings and in a RoboCup competition setting. The natural science settings involve identification of the position of the light source in telescopic and microscopic images, while the RoboCup setting involves identification of the position of the ball. Our results indicate that with real images the new noise-robust version of LV using a single example (i.e. one-shot LV) converges to an accuracy at least comparable to a thirty-shot statistical machine learner on bot

  • Journal article
    Olofsson S, Deisenroth MP, Misener R, 2018,

    Design of Experiments for Model Discrimination using Gaussian Process Surrogate Models

    , Computer Aided Chemical Engineering, Vol: 44, Pages: 847-852, ISSN: 1570-7946

    Given rival mathematical models and an initial experimental data set, optimal design of experiments for model discrimination discards inaccurate models. Model discrimination is fundamentally about finding out how systems work. Not knowing how a particular system works, or having several rivalling models to predict the behaviour of the system, makes controlling and optimising the system more difficult. The most common way to perform model discrimination is by maximising the pairwise squared difference between model predictions, weighted by measurement noise and model uncertainty resulting from uncertainty in the fitted model parameters. The model uncertainty for analytical model functions is computed using gradient information. We develop a novel method where we replace the black-box models with Gaussian process surrogate models. Using the surrogate models, we are able to approximately marginalise out the model parameters, yielding the model uncertainty. Results show the surrogate model method working for model discrimination for classical test instances.
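
    The weighted pairwise-difference criterion described in the abstract above can be sketched with GP surrogates in a few lines. Everything here is an illustrative assumption (two toy rival models, scikit-learn surrogates, a grid of candidate designs, a hypothetical discrimination_score function) rather than the authors' method or code.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Two rival "mechanistic" models, treated as black boxes.
    model_a = lambda x: np.exp(-x)
    model_b = lambda x: 1.0 / (1.0 + x)

    # GP surrogates fitted to a few simulator evaluations of each model.
    X_train = np.linspace(0, 3, 8).reshape(-1, 1)
    gp_a = GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-6).fit(X_train, model_a(X_train).ravel())
    gp_b = GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-6).fit(X_train, model_b(X_train).ravel())

    def discrimination_score(X, noise_var=0.01):
        """Squared prediction difference weighted by noise and surrogate uncertainty."""
        mu_a, sd_a = gp_a.predict(X, return_std=True)
        mu_b, sd_b = gp_b.predict(X, return_std=True)
        return (mu_a - mu_b) ** 2 / (sd_a ** 2 + sd_b ** 2 + noise_var)

    # Pick the next experimental condition from a grid of candidate designs.
    candidates = np.linspace(0, 5, 501).reshape(-1, 1)
    best = candidates[np.argmax(discrimination_score(candidates))]
    print("next experiment at x =", best.item())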

  • Conference paper
    Wang K, Shah A, Kormushev P, 2018,

    SLIDER: a novel bipedal walking robot without knees

    , Towards Autonomous Robotic Systems (TAROS) 2018, Publisher: Springer International Publishing AG, part of Springer Nature, Pages: 471-472, ISSN: 0302-9743

    In this work, we propose a novel mobile rescue robot equipped with an immersive stereoscopic teleperception and a teleoperation control. This robot is designed with the capability to perform safely a casualty-extraction procedure. We have built a proof-of-concept mobile rescue robot called ResQbot for the experimental platform. An approach called “loco-manipulation” is used to perform the casualty-extraction procedure using the platform. The performance of this robot is evaluated in terms of task accomplishment and safety by conducting a mock rescue experiment. We use a custom-made human-sized dummy that has been sensorised to be used as the casualty. In terms of safety, we observe several parameters during the experiment including impact force, acceleration, speed and displacement of the dummy’s head. We also compare the performance of the proposed immersive stereoscopic teleperception to conventional monocular teleperception. The results of the experiments show that the observed safety parameters are below key safety thresholds which could possibly lead to head or neck injuries. Moreover, the teleperception comparison results demonstrate an improvement in task-accomplishment performance when the operator is using the immersive teleperception.

