
Search results

  • Conference paper
    Tiwari K, Honore V, Jeong S, Chong NY, Deisenroth MP et al., 2017,

    Resource-constrained decentralized active sensing for multi-robot systems using distributed Gaussian processes

    , 2016 16th International Conference on Control, Automation and Systems (ICCAS), Publisher: IEEE, Pages: 13-18, ISSN: 1598-7833

    We consider the problem of area coverage for robot teams operating under resource constraints, while modeling spatio-temporal environmental phenomena. The aim of the mobile robot team is to avoid exhaustive search and only visit the most important locations that can improve the prediction accuracy of a spatio-temporal model. We use a Gaussian Process (GP) to model spatially varying and temporally evolving dynamics of the target phenomenon. Each robot of the team is allocated a dedicated search area wherein the robot autonomously optimizes its prediction accuracy. We present this as a Decentralized Computation and Centralized Data Fusion approach wherein the trajectory sampled by the robot is generated using our proposed Resource-Constrained Decentralized Active Sensing (RC-DAS). Since each robot possesses its own independent prediction model, at the end of each robot's mission time we fuse all the prediction models from all robots to obtain a global model of the spatio-temporal phenomenon. Previously, all robots and GPs needed to be synchronized so that the GPs could be jointly trained. However, doing so defeats the purpose of a fully decentralized mobile robot team. Thus, we allow the robots to independently gather new measurements and update their model parameters irrespective of other members of the team. To evaluate the performance of our model, we compare the trajectory traced by the robot using active and passive (e.g., nearest neighbor selection) sensing. We compare the performance and cost incurred by a resource-constrained optimization with the unconstrained entropy maximization version.
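
As a loose illustration of the entropy-style criterion behind such active sensing (not the authors' RC-DAS: this sketch assumes a single robot, no travel budget, and a hypothetical greedy rule), one can repeatedly visit the candidate location where a GP's posterior variance is currently highest:

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential covariance between row vectors of a and b."""
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-0.5 * d2 / ell**2)

def posterior_var(X_obs, X_cand, ell=0.3, noise=1e-2):
    """GP posterior variance at candidate locations given visited ones."""
    K = rbf(X_obs, X_obs, ell) + noise * np.eye(len(X_obs))
    Ks = rbf(X_cand, X_obs, ell)
    return 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)

def active_sensing(candidates, n_visits, start=0):
    """Greedy variance-maximizing sensing: each step visits the candidate
    location where the GP model is currently most uncertain."""
    visited = [start]
    for _ in range(n_visits - 1):
        v = posterior_var(candidates[visited], candidates)
        v[visited] = -np.inf            # do not revisit sampled locations
        visited.append(int(np.argmax(v)))
    return visited

# a 1-D "search area" discretized into 50 candidate locations
candidates = np.linspace(0, 1, 50)[:, None]
route = active_sensing(candidates, n_visits=5)
```

The greedy rule naturally spreads the visits across the area instead of exhaustively sweeping it, which is the intuition the paper builds on under explicit resource constraints.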

  • Journal article
    Zhang Q, Filippi S, Gretton A, Sejdinovic D et al., 2017,

    Large-Scale Kernel Methods for Independence Testing

    , Statistics and Computing, Vol: 28, Pages: 113-130, ISSN: 1573-1375

    Representations of probability measures in reproducing kernel Hilbert spaces provide a flexible framework for fully nonparametric hypothesis tests of independence, which can capture any type of departure from independence, including nonlinear associations and multivariate interactions. However, these approaches come with an at least quadratic computational cost in the number of observations, which can be prohibitive in many applications. Arguably, it is exactly in such large-scale datasets that capturing any type of dependence is of interest, so striking a favourable tradeoff between computational efficiency and test performance for kernel independence tests would have a direct impact on their applicability in practice. In this contribution, we provide an extensive study of the use of large-scale kernel approximations in the context of independence testing, contrasting block-based, Nyström and random Fourier feature approaches. Through a variety of synthetic data experiments, it is demonstrated that our novel large-scale methods give comparable performance with existing methods whilst using significantly less computation time and memory.
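
A minimal sketch of the random Fourier feature route to cheap dependence measurement, assuming an HSIC-style statistic built from the cross-covariance of the two feature maps (function names are illustrative, and this omits the null-distribution calibration an actual test needs):

```python
import numpy as np

def rff(x, n_features, bandwidth, rng):
    """Random Fourier features approximating a Gaussian (RBF) kernel."""
    w = rng.normal(scale=1.0 / bandwidth, size=(1, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(x.reshape(-1, 1) @ w + b)

def rff_dependence_stat(x, y, n_features=50, bandwidth=1.0, seed=0):
    """Squared Frobenius norm of the empirical cross-covariance between the
    RFF maps of x and y: an HSIC-style dependence score computable in time
    linear in the number of observations."""
    rng = np.random.default_rng(seed)
    zx = rff(x, n_features, bandwidth, rng)
    zy = rff(y, n_features, bandwidth, rng)
    zx -= zx.mean(axis=0)               # center the feature maps
    zy -= zy.mean(axis=0)
    return float(np.sum((zx.T @ zy / len(x)) ** 2))

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
dep = rff_dependence_stat(x, np.sin(3 * x) + 0.1 * rng.normal(size=2000))
ind = rff_dependence_stat(x, rng.normal(size=2000))
```

The nonlinearly dependent pair scores markedly higher than the independent one, at cost O(n) in the sample size rather than the quadratic cost of exact kernel statistics.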

  • Conference paper
    Chamberlain BP, Humby C, Deisenroth MP, 2017,

    Probabilistic Inference of Twitter Users' Age Based on What They Follow.

    , Publisher: Springer, Pages: 191-203

  • Conference paper
    Eleftheriadis S, Rudovic O, Deisenroth MP, Pantic M et al., 2016,

    Variational Gaussian Process Auto-Encoder for Ordinal Prediction of Facial Action Units

    , 13th Asian Conference on Computer Vision (ACCV’16), Publisher: Springer, Pages: 154-170, ISSN: 0302-9743

    We address the task of simultaneous feature fusion and modeling of discrete ordinal outputs. We propose a novel Gaussian process (GP) auto-encoder modeling approach. In particular, we introduce GP encoders to project multiple observed features onto a latent space, while GP decoders are responsible for reconstructing the original features. Inference is performed in a novel variational framework, where the recovered latent representations are further constrained by the ordinal output labels. In this way, we seamlessly integrate the ordinal structure in the learned manifold, while attaining robust fusion of the input features. We demonstrate the representation abilities of our model on benchmark datasets from machine learning and affect analysis. We further evaluate the model on the tasks of feature fusion and joint ordinal prediction of facial action units. Our experiments demonstrate the benefits of the proposed approach compared to the state of the art.

  • Conference paper
    Joulani P, Gyorgy A, Szepesvari C, 2016,

    A unified modular analysis of online and stochastic optimization: adaptivity, optimism, non-convexity

    , 9th NIPS Workshop on Optimization for Machine Learning

    We present a simple unified analysis of adaptive Mirror Descent (MD) and Follow-the-Regularized-Leader (FTRL) algorithms for online and stochastic optimization in (possibly infinite-dimensional) Hilbert spaces. The analysis is modular in the sense that it completely decouples the effect of possible assumptions on the loss functions (such as smoothness, strong convexity, and non-convexity) and on the optimization regularizers (such as strong convexity, non-smooth penalties in composite-objective learning, and non-monotone step-size sequences). We demonstrate the power of this decoupling by obtaining generalized algorithms and improved regret bounds for the so-called “adaptive optimistic online learning” setting. In addition, we simplify and extend a large body of previous work, including several AdaGrad formulations and composite-objective and implicit-update algorithms. In all cases, the results follow as simple corollaries within a few lines of algebra. Finally, the decomposition enables us to obtain preliminary global guarantees for limited classes of non-convex problems.
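
As a toy instance of the FTRL family the paper analyzes (a sketch only, with illustrative names and a hypothetical setup: linear losses over the Euclidean unit ball with a fixed L2 regularizer), note that the FTRL objective ⟨G_t, x⟩ + ‖x‖²/(2η) is minimized over the ball by projecting the unconstrained solution −ηG_t:

```python
import numpy as np

def ftrl_l2(gradients, eta=0.05):
    """FTRL with regularizer ||x||^2/(2*eta) over the unit ball on linear
    losses <g_t, x>: play the projection of -eta * (sum of past gradients)."""
    g_sum = np.zeros_like(gradients[0])
    plays = []
    for g in gradients:
        x = -eta * g_sum
        norm = np.linalg.norm(x)
        plays.append(x / norm if norm > 1.0 else x)  # Euclidean projection
        g_sum += g
    return plays

rng = np.random.default_rng(0)
grads = [np.array([1.0, 0.0]) + 0.1 * rng.normal(size=2) for _ in range(200)]
plays = ftrl_l2(grads)
alg_loss = sum(float(g @ x) for g, x in zip(grads, plays))
comp_loss = -np.linalg.norm(np.sum(grads, axis=0))  # best fixed point in the ball
regret = alg_loss - comp_loss
```

The regret against the best fixed point stays small and essentially stops growing once the cumulative gradient pins the play to the boundary, which is the kind of behavior the paper's modular bounds capture under various regularizer assumptions.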

  • Conference paper
    Shaloudegi K, Gyorgy A, Szepesvari C, Xu W et al., 2016,

    SDP relaxation with randomized rounding for energy disaggregation

    , The Thirtieth Annual Conference on Neural Information Processing Systems (NIPS), Publisher: Neural Information Processing Systems Foundation, Inc.

    We develop a scalable, computationally efficient method for the task of energy disaggregation for home appliance monitoring. In this problem the goal is to estimate the energy consumption of each appliance over time based on the total energy-consumption signal of a household. The current state of the art is to model the problem as inference in factorial HMMs, and use quadratic programming to find an approximate solution to the resulting quadratic integer program. Here we take a more principled approach, better suited to integer programming problems, and find an approximate optimum by combining convex semidefinite relaxations and randomized rounding, as well as a scalable ADMM method that exploits the special structure of the resulting semidefinite program. Simulation results on both synthetic and real-world datasets demonstrate the superiority of our method.
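
The generic randomized-rounding step referred to here can be sketched on a toy problem (this is only the rounding component under assumed inputs; the paper's contribution also includes the FHMM formulation and the scalable ADMM solver for the relaxation itself):

```python
import numpy as np

def randomized_rounding(X, Q, n_samples=200, seed=0):
    """Gaussian randomized rounding of an SDP relaxation solution.

    X is the PSD matrix produced by relaxing max x^T Q x over x in
    {-1, +1}^n; we sample z ~ N(0, X), round to sign(z), and keep the
    best binary vector found."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(X + 1e-9 * np.eye(len(X)))  # jitter for rank-deficient X
    best_x, best_val = None, -np.inf
    for _ in range(n_samples):
        x = np.sign(L @ rng.normal(size=len(X)))
        x[x == 0] = 1.0                      # break exact ties deterministically
        val = float(x @ Q @ x)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

# toy instance with a known rank-1 relaxation optimum X = c c^T
c = np.array([1.0, 1.0, -1.0, -1.0])
Q = np.outer(c, c)
x_hat, val = randomized_rounding(np.outer(c, c), Q)
```

On this rank-1 instance every rounded sample recovers the optimal sign pattern ±c with objective value 16, illustrating why rounding a good relaxation can give near-optimal integer solutions.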

  • Conference paper
    Huang R, Lattimore T, Gyorgy A, Szepesvari C et al., 2016,

    Following the Leader and Fast Rates in Linear Prediction: Curved Constraint Sets and Other Regularities

    , Advances in Neural Information Processing Systems 29 (NIPS 2016), Publisher: Neural Information Processing Systems Foundation, Inc.

    The follow the leader (FTL) algorithm, perhaps the simplest of all online learning algorithms, is known to perform well when the loss functions it is used on are positively curved. In this paper we ask whether there are other “lucky” settings when FTL achieves sublinear, “small” regret. In particular, we study the fundamental problem of linear prediction over a non-empty convex, compact domain. Amongst other results, we prove that the curvature of the boundary of the domain can act as if the losses were curved: in this case, we prove that as long as the mean of the loss vectors has positive length bounded away from zero, FTL enjoys a logarithmic growth rate of regret, while, e.g., for polyhedral domains and stochastic data it enjoys finite expected regret. Building on a previously known meta-algorithm, we also get an algorithm that simultaneously enjoys the worst-case guarantees and the bound available for FTL.
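
The favourable curved-domain regime is easy to simulate (a sketch under assumed conditions: the unit Euclidean ball as the domain, i.i.d. linear losses whose mean has positive length):

```python
import numpy as np

def ftl_unit_ball(losses):
    """Follow the Leader over the unit Euclidean ball with linear losses:
    the leader minimizes <G_{t-1}, x> subject to ||x|| <= 1, which has the
    closed form x_t = -G_{t-1} / ||G_{t-1}||."""
    g_sum = np.zeros_like(losses[0])
    plays = []
    for g in losses:
        norm = np.linalg.norm(g_sum)
        plays.append(-g_sum / norm if norm > 0 else np.zeros_like(g))
        g_sum += g
    return plays

rng = np.random.default_rng(0)
# i.i.d. loss vectors whose mean has positive length: the favourable regime
losses = [np.array([1.0, 0.5]) + 0.2 * rng.normal(size=2) for _ in range(2000)]
plays = ftl_unit_ball(losses)
alg_loss = sum(float(g @ x) for g, x in zip(losses, plays))
comp_loss = -np.linalg.norm(np.sum(losses, axis=0))  # best fixed point in hindsight
regret = alg_loss - comp_loss
```

Over 2000 rounds the regret stays a small constant (far below the worst-case √T scale), consistent with the logarithmic rate the paper proves for curved domains.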

  • Journal article
    Filippi S, Holmes CC, Nieto-Barajas LE, 2016,

    Scalable Bayesian nonparametric measures for exploring pairwise dependence via Dirichlet Process Mixtures

    , Electronic Journal of Statistics, Vol: 10, Pages: 3338-3354, ISSN: 1935-7524

    In this article we propose novel Bayesian nonparametric methods using Dirichlet Process Mixture (DPM) models for detecting pairwise dependence between random variables while accounting for uncertainty in the form of the underlying distributions. A key criterion is that the procedures should scale to large data sets. In this regard we find that the formal calculation of the Bayes factor for a dependent-vs.-independent DPM joint probability measure is not feasible computationally. To address this we present Bayesian diagnostic measures for characterising evidence against a “null model” of pairwise independence. In simulation studies, as well as for a real data analysis, we show that our approach provides a useful tool for the exploratory nonparametric Bayesian analysis of large multivariate data sets.

  • Conference paper
    Kern T, Gyorgy A, 2016,

    SVRG++ with non-uniform sampling

    , 9th NIPS Workshop on Optimization for Machine Learning, Publisher: Neural Information Processing Systems Foundation, Inc.

    SVRG++ is a recent randomized optimization algorithm designed to solve non-strongly convex smooth composite optimization problems in the large data regime. In this paper we combine SVRG++ with non-uniform sampling of the data points (already present in the original SVRG algorithm), leading to an algorithm with the best sample complexity to date and state-of-the-art empirical performance. While the combination and the analysis of the algorithm is admittedly straightforward, our experimental results show significant improvement over the original SVRG++ method, with the new method outperforming all competitors on datasets where the smoothness of the components varies. This demonstrates that, despite its simplicity and limited novelty, this extension is important in practice.
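
The non-uniform sampling idea can be sketched on a plain SVRG loop for least squares (an illustration only, with assumed names and parameters; the SVRG++ epoch-doubling schedule is omitted): sample component i with probability proportional to its smoothness constant and reweight so the gradient estimate stays unbiased.

```python
import numpy as np

def svrg_nonuniform(A, b, n_epochs=30, lr=0.05, seed=0):
    """Variance-reduced SGD (SVRG-style) for (1/2n)||Ax - b||^2, sampling
    component i with probability p_i proportional to its smoothness constant
    L_i = ||a_i||^2 and reweighting the correction by 1/(n p_i) so the
    gradient estimate remains unbiased."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    L = np.sum(A ** 2, axis=1)
    p = L / L.sum()
    x = np.zeros(d)
    for _ in range(n_epochs):
        snap = x.copy()
        full_grad = A.T @ (A @ snap - b) / n     # full gradient at the snapshot
        for _ in range(2 * n):
            i = rng.choice(n, p=p)
            g = A[i] * (A[i] @ x - b[i])          # component gradient at x
            g_snap = A[i] * (A[i] @ snap - b[i])  # same component at snapshot
            x -= lr * ((g - g_snap) / (n * p[i]) + full_grad)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 5))
x_true = rng.normal(size=5)
x_hat = svrg_nonuniform(A, A @ x_true)
```

With importance weights the effective smoothness seen by the step size is the average of the L_i rather than their maximum, which is exactly where the benefit shows up when component smoothness varies.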

  • Conference paper
    Calandra R, Peters J, Rasmussen CE, Deisenroth MP et al., 2016,

    Manifold Gaussian Processes for Regression

    , International Joint Conference on Neural Networks, Publisher: IEEE, ISSN: 2161-4407

    Off-the-shelf Gaussian Process (GP) covariance functions encode smoothness assumptions on the structure of the function to be modeled. To model complex and non-differentiable functions, these smoothness assumptions are often too restrictive. One way to alleviate this limitation is to find a different representation of the data by introducing a feature space. This feature space is often learned in an unsupervised way, which might lead to data representations that are not useful for the overall regression task. In this paper, we propose Manifold Gaussian Processes, a novel supervised method that jointly learns a transformation of the data into a feature space and a GP regression from the feature space to the observed space. The Manifold GP is a full GP and allows learning data representations which are useful for the overall regression task. As a proof of concept, we evaluate our approach on complex non-smooth functions where standard GPs perform poorly, such as step functions and robotics tasks with contacts.
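
The composition GP(φ(x)) is easy to sketch on the step-function example (a simplification: here φ is a fixed steep tanh standing in for the learned transformation, whereas the paper learns φ jointly with the GP; helper names are illustrative):

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential covariance between row vectors of a and b."""
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-0.5 * d2 / ell**2)

def gp_mean(X, y, Xs, noise=1e-4):
    """Posterior mean of standard GP regression."""
    K = rbf(X, X) + noise * np.eye(len(X))
    return rbf(Xs, X) @ np.linalg.solve(K, y)

# fixed feature map: a near-saturating tanh flattens each side of the step
phi = lambda x: np.tanh(10.0 * x)

X = np.linspace(-1, 1, 40)[:, None]
y = (X[:, 0] > 0).astype(float)      # step function: hard for a smooth GP prior
Xs = np.array([[-0.5], [0.5]])
pred_plain = gp_mean(X, y, Xs)               # GP directly on the inputs
pred_manifold = gp_mean(phi(X), y, phi(Xs))  # GP on the feature space
```

In feature space the two plateaus of the step collapse into tight clusters, so the composed model's posterior mean at x = ±0.5 sits close to the true values 0 and 1.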

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
