Search results

  • Book
    Deisenroth MP, Faisal AA, Ong CS, 2020, Mathematics for Machine Learning, Publisher: Cambridge University Press, ISBN: 9781108455145
  • Journal article
    Moriconi R, Kumar KSS, Deisenroth MP, High-dimensional Bayesian optimization with projections using quantile Gaussian processes, Optimization Letters, ISSN: 1862-4472
  • Journal article
    Creswell A, Bharath AA, 2019, Denoising adversarial autoencoders, IEEE Transactions on Neural Networks and Learning Systems, Vol: 30, Pages: 968-984, ISSN: 2162-2388

    Unsupervised learning is of growing interest because it unlocks the potential held in vast amounts of unlabelled data to learn useful representations for inference. Autoencoders, a form of generative model, may be trained by learning to reconstruct unlabelled input data from a latent representation space. More robust representations may be produced by an autoencoder if it learns to recover clean input samples from corrupted ones. Representations may be further improved by introducing regularisation during training to shape the distribution of the encoded data in the latent space. We suggest denoising adversarial autoencoders, which combine denoising and regularisation, shaping the distribution of latent space using adversarial training. We introduce a novel analysis that shows how denoising may be incorporated into the training and sampling of adversarial autoencoders. Experiments are performed to assess the contributions that denoising makes to the learning of representations for classification and sample synthesis. Our results suggest that autoencoders trained using a denoising criterion achieve higher classification performance, and can synthesise samples that are more consistent with the input data than those trained without a corruption process.
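
The recipe in this abstract — corrupt the input, reconstruct the clean target, and shape the latent distribution with a discriminator — is compact enough to sketch. The PyTorch toy below is a minimal illustration under assumed choices (Gaussian corruption, an N(0, I) latent prior, arbitrary layer sizes); it is not the authors' implementation.

```python
import torch
import torch.nn as nn

x_dim, z_dim = 784, 8                                # e.g. flattened 28x28 images
enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))
disc = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, 1))

bce = nn.BCEWithLogitsLoss()
opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

def train_step(x):
    # Denoising: corrupt the input, but reconstruct the *clean* target.
    x_noisy = x + 0.3 * torch.randn_like(x)          # assumed Gaussian corruption

    # Discriminator: tell prior samples apart from encodings of corrupted inputs.
    with torch.no_grad():
        z_fake = enc(x_noisy)
    z_real = torch.randn_like(z_fake)                # assumed N(0, I) target prior
    ones, zeros = torch.ones(len(x), 1), torch.zeros(len(x), 1)
    d_loss = bce(disc(z_real), ones) + bce(disc(z_fake), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Autoencoder: reconstruction loss plus an adversarial term that shapes
    # the latent distribution by trying to fool the discriminator.
    z = enc(x_noisy)
    recon = ((dec(z) - x) ** 2).mean()
    fool = bce(disc(z), ones)
    opt_ae.zero_grad(); (recon + 0.1 * fool).backward(); opt_ae.step()

train_step(torch.rand(32, x_dim))                    # smoke test with random "images"
```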

  • Journal article
    Bertone G, Deisenroth MP, Kim JS, Liem S, de Austri RR, Welling M et al., 2019, Accelerating the BSM interpretation of LHC data with machine learning, Physics of the Dark Universe, Vol: 24, ISSN: 2212-6864
  • Journal article
    Kormushev P, Ugurlu B, Caldwell DG, Tsagarakis NG et al., 2019, Learning to exploit passive compliance for energy-efficient gait generation on a compliant humanoid, Autonomous Robots, Vol: 43, Pages: 79-95, ISSN: 1573-7527

    Modern humanoid robots include not only active compliance but also passive compliance. Apart from improved safety and dependability, the availability of passive elements, such as springs, opens up new possibilities for improving energy efficiency. With this in mind, this paper addresses the challenging open problem of exploiting passive compliance for the purpose of energy-efficient humanoid walking. To this end, we develop a method comprising two parts: an optimization part that finds an optimal vertical center-of-mass trajectory, and a walking pattern generator part that uses this trajectory to produce a dynamically-balanced gait. For the optimization part, we propose a reinforcement learning approach that dynamically evolves the policy parametrization during the learning process. By gradually increasing the representational power of the policy parametrization, it manages to find better policies faster and at lower computational cost. For the walking generator part, we develop a variable-center-of-mass-height ZMP-based bipedal walking pattern generator. The method is tested in real-world experiments with the bipedal robot COMAN and achieves a significant 18% reduction in electric energy consumption by learning to efficiently use the passive compliance of the robot.
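
One way to read the "dynamically evolving policy parametrization" is as a curriculum over basis functions: start with a coarse trajectory representation and add terms as learning progresses, initializing new coefficients at zero so the policy is unchanged at the moment of growth. The NumPy sketch below illustrates only that scheme; the cosine basis, the hill-climbing update, and the surrogate cost standing in for measured energy consumption are all assumptions, not the COMAN controller.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)                      # one gait cycle, normalised time

def trajectory(w):
    """Vertical CoM offset as a truncated cosine series with coefficients w."""
    basis = np.cos(np.outer(np.arange(1, len(w) + 1) * 2.0 * np.pi, t))
    return w @ basis

def cost(w):
    """Stand-in for measured energy use: penalise jerk and tracking error (made up)."""
    z = trajectory(w)
    return 1e4 * np.sum(np.diff(z, 3) ** 2) + np.sum((z - 0.02 * np.sin(2 * np.pi * t)) ** 2)

w = np.zeros(2)                                     # coarse parametrization at first
for it in range(300):
    if it in (100, 200):                            # evolve the parametrization
        w = np.concatenate([w, np.zeros(2)])        # new coefficients start at zero,
                                                    # so the current policy is preserved
    candidate = w + 0.01 * rng.standard_normal(len(w))
    if cost(candidate) < cost(w):                   # simple hill-climbing update
        w = candidate
print(len(w), cost(w))
```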

  • Conference paper
    Dutordoir V, Salimbeni HR, Hensman J, Deisenroth MP et al., 2018, Gaussian process conditional density estimation, Advances in Neural Information Processing Systems, Publisher: Neural Information Processing Systems Conference

    Conditional Density Estimation (CDE) models deal with estimating conditional distributions. The conditions imposed on the distribution are the inputs of the model. CDE is a challenging task as there is a fundamental trade-off between model complexity, representational capacity and overfitting. In this work, we propose to extend the model's input with latent variables and use Gaussian processes (GP) to map this augmented input onto samples from the conditional distribution. Our Bayesian approach allows for the modeling of small datasets, but we also provide the machinery for it to be applied to big data using stochastic variational inference. Our approach can be used to model densities even in sparse data regions, and allows for sharing learned structure between conditions. We illustrate the effectiveness and wide-reaching applicability of our model on a variety of real-world problems, such as spatio-temporal density estimation of taxi drop-offs, non-Gaussian noise modeling, and few-shot learning on Omniglot images.
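
The generative mechanism described here — augment the input with a latent variable, then map the pair through a GP — can be illustrated without any inference machinery: a single function drawn from a GP prior over the augmented space already induces a non-Gaussian conditional once the latent is marginalised by sampling. The kernel, lengthscale, and sample counts below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel on the augmented input (x, w)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

# Fix a condition x* and pair it with latent samples w ~ N(0, 1).
x_star = 0.3
w = rng.standard_normal(300)
X = np.column_stack([np.full_like(w, x_star), w])

# Draw one function from the GP prior over the augmented space and evaluate it.
K = rbf(X, X)
y = rng.multivariate_normal(np.zeros(len(X)), K, check_valid="ignore")

# Sampling over w marginalises the latent: the result is a set of draws from a
# conditional p(y | x*) that need not be Gaussian.
skew = ((y - y.mean()) ** 3).mean() / y.std() ** 3
print("mean:", y.mean(), "skewness:", skew)
```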

  • Conference paper
    Wang K, Shah A, Kormushev P, 2018, SLIDER: A Bipedal Robot with Knee-less Legs and Vertical Hip Sliding Motion, 21st International Conference on Climbing and Walking Robots and Support Technologies for Mobile Machines (CLAWAR 2018)
  • Conference paper
    Salimbeni HR, Cheng C-A, Boots B, Deisenroth MP et al., Orthogonally decoupled variational Gaussian processes, Advances in Neural Information Processing Systems (NIPS) 2018, Publisher: Massachusetts Institute of Technology Press, ISSN: 1049-5258

    Gaussian processes (GPs) provide a powerful non-parametric framework for reasoning over functions. Despite appealing theory, their superlinear computational and memory complexities have presented a long-standing challenge. State-of-the-art sparse variational inference methods trade modeling accuracy against complexity. However, the complexities of these methods still scale superlinearly in the number of basis functions, implying that sparse GP methods are able to learn from large datasets only when a small model is used. Recently, a decoupled approach was proposed that removes the unnecessary coupling between the complexities of modeling the mean and the covariance functions of a GP. It achieves a linear complexity in the number of mean parameters, so an expressive posterior mean function can be modeled. While promising, this approach suffers from optimization difficulties due to ill-conditioning and non-convexity. In this work, we propose an alternative decoupled parametrization. It adopts an orthogonal basis in the mean function to model the residues that cannot be learned by the standard coupled approach. Therefore, our method extends, rather than replaces, the coupled approach to achieve strictly better performance. This construction admits a straightforward natural gradient update rule, so the structure of the information manifold that is lost during decoupling can be leveraged to speed up learning. Empirically, our algorithm demonstrates significantly faster convergence in multiple experiments.
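
For readers who want the shape of the construction: with β the shared inducing inputs and γ additional mean-only inducing inputs, an orthogonally decoupled posterior mean can be written schematically as below. The notation is ours, reconstructed from the abstract's description rather than copied from the paper; the point is that the extra basis is projected orthogonally to the coupled one, so setting the extra parameters to zero recovers the standard coupled parametrization exactly.

```latex
% Schematic decoupled posterior mean (our notation, not necessarily the paper's):
\mu(x) = k_\beta(x)^\top m_\beta
       + \bigl(k_\gamma(x) - K_{\gamma\beta} K_{\beta\beta}^{-1} k_\beta(x)\bigr)^\top m_\gamma
```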

  • Conference paper
    Wilson J, Hutter F, Deisenroth MP, Maximizing acquisition functions for Bayesian optimization, Advances in Neural Information Processing Systems (NIPS) 2018, Publisher: Massachusetts Institute of Technology Press, ISSN: 1049-5258

    Bayesian optimization is a sample-efficient approach to global optimization that relies on theoretically motivated value heuristics (acquisition functions) to guide its search process. Fully maximizing acquisition functions produces the Bayes decision rule, but this ideal is difficult to achieve since these functions are frequently non-trivial to optimize. This statement is especially true when evaluating queries in parallel, where acquisition functions are routinely non-convex, high-dimensional, and intractable. We first show that acquisition functions estimated via Monte Carlo integration are consistently amenable to gradient-based optimization. Subsequently, we identify a common family of acquisition functions, including EI and UCB, whose characteristics not only facilitate but justify use of greedy approaches for their maximization.
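
The first claim — that Monte Carlo estimates of acquisition functions are amenable to gradient-based optimization — is straightforward to demonstrate with the reparameterization trick. In the sketch below the Gaussian "posterior" is a made-up stand-in rather than a fitted GP, and parallel (q = 2) expected improvement is estimated from posterior samples and ascended directly; every numeric choice is illustrative.

```python
import torch

def posterior(x):
    """Made-up stand-in for a GP posterior at query points x (not a fitted model)."""
    return torch.sin(3.0 * x), 0.2 + 0.1 * x**2     # mean, standard deviation

best_f = 0.8                                        # incumbent best observed value
x = torch.tensor([0.1, 0.5], requires_grad=True)    # a batch of q = 2 parallel queries
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    mean, std = posterior(x)
    eps = torch.randn(256, x.shape[0])              # reparameterisation: the noise is
    samples = mean + std * eps                      # separate from the optimised inputs
    qei = torch.clamp(samples.max(dim=1).values - best_f, min=0.0).mean()
    (-qei).backward()                               # gradients flow through the MC estimate
    opt.step()

print(x.detach())                                   # queries moved to high-value regions
```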

  • Conference paper
    Sæmundsson S, Hofmann K, Deisenroth MP, 2018, Meta reinforcement learning with latent variable Gaussian processes, Uncertainty in Artificial Intelligence (UAI) 2018, Publisher: Association for Uncertainty in Artificial Intelligence (AUAI)

    Learning from small data sets is critical in many practical applications where data collection is time consuming or expensive, e.g., robotics, animal experiments or drug design. Meta learning is one way to increase the data efficiency of learning algorithms by generalizing learned concepts from a set of training tasks to unseen, but related, tasks. Often, this relationship between tasks is hard coded or relies in some other way on human expertise. In this paper, we frame meta learning as a hierarchical latent variable model and infer the relationship between tasks automatically from data. We apply our framework in a model-based reinforcement learning setting and show that our meta-learning model effectively generalizes to novel tasks by identifying how new tasks relate to prior ones from minimal data. This results in up to a 60% reduction in the average interaction time needed to solve tasks compared to strong baselines.
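
The core modelling idea — a per-task latent variable that the kernel uses to infer how tasks relate — can be sketched with an exact GP and point-estimate latents. This is far simpler than the hierarchical variational treatment the abstract describes; the sinusoid task family and all hyperparameters below are invented for illustration.

```python
import torch

torch.manual_seed(0)
# Toy task family: y = sin(x + phase), one unknown phase per task. The phase is
# never observed; each task instead gets a free latent h_t appended to the GP input.
phases = torch.tensor([0.0, 1.5, 3.0])
X = torch.rand(3, 20, 1) * 6.0
Y = torch.sin(X.squeeze(-1) + phases[:, None])

H = (0.1 * torch.randn(3, 1)).requires_grad_()      # per-task latent variables
log_ls = torch.zeros(2, requires_grad=True)         # lengthscales for (x, h)

def kernel(A, B):
    ls = log_ls.exp()
    d2 = (((A[:, None, :] - B[None, :, :]) / ls) ** 2).sum(-1)
    return torch.exp(-0.5 * d2)

opt = torch.optim.Adam([H, log_ls], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    # One joint GP over all tasks: augmenting inputs with h_t lets the kernel
    # decide how strongly any two tasks share structure.
    Z = torch.cat([torch.cat([X[t], H[t].expand(20, 1)], dim=1) for t in range(3)])
    K = kernel(Z, Z) + 1e-3 * torch.eye(60)
    nll = -torch.distributions.MultivariateNormal(torch.zeros(60), K).log_prob(Y.reshape(-1))
    nll.backward()
    opt.step()

print(H.detach().squeeze())     # tasks with similar phases should get similar latents
```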

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
