Search results

  • CONFERENCE PAPER
    Flaxman S, Sejdinovic D, Cunningham JP, Filippi S et al., 2016,

    Bayesian Learning of Kernel Embeddings

    , UAI'16

    Kernel methods are one of the mainstays of machine learning, but the problem of kernel learning remains challenging, with only a few heuristics and very little theory. This is of particular importance in methods based on estimation of kernel mean embeddings of probability measures. For characteristic kernels, which include most commonly used ones, the kernel mean embedding uniquely determines its probability measure, so it can be used to design a powerful statistical testing framework, which includes nonparametric two-sample and independence tests. In practice, however, the performance of these tests can be very sensitive to the choice of kernel and its lengthscale parameters. To address this central issue, we propose a new probabilistic model for kernel mean embeddings, the Bayesian Kernel Embedding model, combining a Gaussian process prior over the Reproducing Kernel Hilbert Space containing the mean embedding with a conjugate likelihood function, thus yielding a closed form posterior over the mean embedding. The posterior mean of our model is closely related to recently proposed shrinkage estimators for kernel mean embeddings, while the posterior uncertainty is a new, interesting feature with various possible applications. Critically for the purposes of kernel learning, our model gives a simple, closed form marginal pseudolikelihood of the observed data given the kernel hyperparameters. This marginal pseudolikelihood can either be optimized to inform the hyperparameter choice, or fully Bayesian inference can be used.
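    The core construction (a GP prior over the embedding plus a conjugate Gaussian likelihood, giving a closed-form posterior) can be sketched as plain GP regression on the empirical embedding. The Python sketch below is illustrative only, not the authors' code: it assumes an RBF kernel, reuses the kernel itself as the prior covariance, and uses a scalar `noise` term standing in for the conjugate likelihood; all function and parameter names are made up for this example.

    ```python
    import numpy as np

    def rbf_kernel(A, B, lengthscale=1.0):
        """Gaussian RBF kernel matrix: k(a, b) = exp(-||a - b||^2 / (2 l^2))."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale ** 2)

    def bayesian_mean_embedding(X, T, lengthscale=1.0, noise=1e-1):
        """GP posterior over the kernel mean embedding, evaluated at points T.

        The empirical embedding mu_hat(x_i) = mean_j k(x_j, x_i) is treated as
        a noisy observation of the true embedding; the prior covariance is
        (for simplicity) the kernel itself, a stand-in for the paper's choice.
        X and T are (n, d) and (m, d) arrays of points.
        """
        n = len(X)
        K = rbf_kernel(X, X, lengthscale)        # n x n Gram matrix
        mu_hat = K.mean(axis=1)                  # empirical embedding at X
        Ks = rbf_kernel(T, X, lengthscale)       # cross-covariance, m x n
        A = np.linalg.inv(K + noise * np.eye(n))
        post_mean = Ks @ A @ mu_hat              # shrinkage-style posterior mean
        post_var = rbf_kernel(T, T, lengthscale).diagonal() \
            - np.einsum('ij,jk,ik->i', Ks, A, Ks)
        return post_mean, post_var
    ```

    Note how the posterior mean reduces to a shrinkage estimator of the empirical embedding, matching the connection the abstract draws; the posterior variance is the additional Bayesian output.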

  • CONFERENCE PAPER
    Gyorgy A, Szepesvari C, 2016,

    Shifting Regret, Mirror Descent, and Matrices

    , International Conference on Machine Learning, Publisher: Journal of Machine Learning Research, Pages: 2943-2951, ISSN: 1532-4435

    We consider the problem of online prediction in changing environments. In this framework the performance of a predictor is evaluated as the loss relative to an arbitrarily changing predictor, whose individual components come from a base class of predictors. Typical results in the literature consider different base classes (experts, linear predictors on the simplex, etc.) separately. Introducing an arbitrary mapping inside the mirror descent algorithm, we provide a framework that unifies and extends existing results. As an example, we prove new shifting regret bounds for matrix prediction problems.
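    As a rough illustration of the "mapping inside mirror descent" idea, the sketch below specializes to the experts setting: exponentiated gradient on the simplex followed by a uniform fixed-share mixing step, a classical mapping known to yield shifting-regret guarantees. `loss_fn` is a hypothetical callback, not part of the paper.

    ```python
    import numpy as np

    def fixed_share_eg(loss_fn, n_experts, T, eta=0.1, alpha=0.01):
        """Exponentiated gradient on the simplex with a fixed-share mixing step.

        Uniform mixing with weight alpha is one concrete instance of the
        arbitrary mapping applied inside mirror descent.
        """
        w = np.full(n_experts, 1.0 / n_experts)
        for t in range(T):
            losses = loss_fn(t, w)                     # vector of expert losses
            w = w * np.exp(-eta * losses)              # mirror-descent (EG) step
            w /= w.sum()                               # projection onto the simplex
            w = (1 - alpha) * w + alpha / n_experts    # fixed-share mapping
        return w
    ```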

  • CONFERENCE PAPER
    Huang R, Lattimore T, Gyorgy A, Szepesvari C et al., 2016,

    Following the Leader and Fast Rates in Linear Prediction: Curved Constraint Sets and Other Regularities

    , Advances in Neural Information Processing Systems 29 (NIPS 2016), Publisher: Neural Information Processing Systems Foundation, Inc.

    The follow the leader (FTL) algorithm, perhaps the simplest of all online learning algorithms, is known to perform well when the loss functions it is used on are positively curved. In this paper we ask whether there are other “lucky” settings when FTL achieves sublinear, “small” regret. In particular, we study the fundamental problem of linear prediction over a non-empty convex, compact domain. Amongst other results, we prove that the curvature of the boundary of the domain can act as if the losses were curved: in this case, we prove that as long as the mean of the loss vectors has positive length bounded away from zero, FTL enjoys a logarithmic growth rate of regret, while, e.g., for polyhedral domains and stochastic data it enjoys finite expected regret. Building on a previously known meta-algorithm, we also get an algorithm that simultaneously enjoys the worst-case guarantees and the bound available for FTL.
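    For intuition, here is a minimal, non-authoritative sketch of FTL for linear losses over the Euclidean ball, the prototypical curved domain from the paper: the leader is simply the boundary point opposite the running sum of loss vectors. Names and the regret bookkeeping are illustrative.

    ```python
    import numpy as np

    def ftl_ball_regret(loss_vectors, radius=1.0):
        """Follow the Leader for linear losses <f_t, w> over {w : ||w|| <= radius}.

        On a ball, the leader is the boundary point opposite the cumulative
        loss vector -- the curved-boundary case where FTL can enjoy
        logarithmic regret when the mean loss vector stays away from zero.
        """
        cum = np.zeros_like(loss_vectors[0], dtype=float)
        total_loss = 0.0
        for f in loss_vectors:
            if cum.any():                  # leader: argmin_{||w||<=r} <cum, w>
                w = -radius * cum / np.linalg.norm(cum)
            else:
                w = np.zeros_like(cum)     # any point is a leader before data
            total_loss += float(f @ w)
            cum += f
        best_fixed = -radius * np.linalg.norm(cum)   # best point in hindsight
        return total_loss - best_fixed
    ```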

  • CONFERENCE PAPER
    Joulani P, Gyorgy A, Szepesvari C, 2016,

    Delay-Tolerant Online Convex Optimization: Unified Analysis and Adaptive-Gradient Algorithms

    , Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16), Publisher: AAAI

    We present a unified, black-box-style method for developing and analyzing online convex optimization (OCO) algorithms for full-information online learning in delayed-feedback environments. Our new, simplified analysis enables us to substantially improve upon previous work and to solve a number of open problems from the literature. Specifically, we develop and analyze asynchronous AdaGrad-style algorithms from the Follow-the-Regularized-Leader (FTRL) and Mirror Descent family that, unlike previous works, can handle projections and adapt both to the gradients and the delays, without relying on either strong convexity or smoothness of the objective function, or data sparsity. Our unified framework builds on a natural reduction from delayed-feedback to standard (non-delayed) online learning. This reduction, together with recent unification results for OCO algorithms, allows us to analyze the regret of generic FTRL and Mirror Descent algorithms in the delayed-feedback setting in a unified manner using standard proof techniques. In addition, the reduction is exact and can be used to obtain both upper and lower bounds on the regret in the delayed-feedback setting.
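    The shape of the reduction (though not its exact construction) is simple enough to sketch: wrap any standard OCO learner and feed it gradients in whatever order they arrive, so the base algorithm never sees the delays. The `predict`/`update` interface below is assumed purely for illustration.

    ```python
    class DelayedFeedbackWrapper:
        """Reduction from delayed-feedback to standard (non-delayed) online learning.

        Gradients that arrive late are passed to the base learner in arrival
        order; the base learner is any OCO algorithm exposing predict/update.
        """
        def __init__(self, base_learner):
            self.base = base_learner

        def predict(self):
            # Play whatever the base learner would play now.
            return self.base.predict()

        def receive(self, gradient):
            # Called whenever a (possibly delayed) gradient finally arrives.
            self.base.update(gradient)
    ```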

  • CONFERENCE PAPER
    Joulani P, Gyorgy A, Szepesvari C, 2016,

    A unified modular analysis of online and stochastic optimization: adaptivity, optimism, non-convexity

    , 9th NIPS Workshop on Optimization for Machine Learning

    We present a simple unified analysis of adaptive Mirror Descent (MD) and Follow-the-Regularized-Leader (FTRL) algorithms for online and stochastic optimization in (possibly infinite-dimensional) Hilbert spaces. The analysis is modular in the sense that it completely decouples the effect of possible assumptions on the loss functions (such as smoothness, strong convexity, and non-convexity) and on the optimization regularizers (such as strong convexity, non-smooth penalties in composite-objective learning, and non-monotone step-size sequences). We demonstrate the power of this decoupling by obtaining generalized algorithms and improved regret bounds for the so-called “adaptive optimistic online learning” setting. In addition, we simplify and extend a large body of previous work, including several AdaGrad formulations and composite-objective and implicit-update algorithms. In all cases, the results follow as simple corollaries within a few lines of algebra. Finally, the decomposition enables us to obtain preliminary global guarantees for limited classes of non-convex problems.
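    One concrete instance covered by such a modular analysis is diagonal AdaGrad viewed as FTRL with an adaptive regularizer. The sketch below (illustrative names, unconstrained updates, no claim to match the paper's generality) shows the per-coordinate regularizer growing with the observed squared gradients.

    ```python
    import numpy as np

    def adagrad_ftrl(grad_fn, dim, T, eta=1.0, eps=1e-8):
        """Diagonal AdaGrad as unconstrained FTRL with an adaptive regularizer.

        Minimizing <sum of gradients, w> + (1 / 2 eta) * sum_i sqrt(G_i) * w_i^2,
        where G_i accumulates squared gradients, gives the closed-form update.
        """
        g_sum = np.zeros(dim)    # running sum of gradients
        g_sq = np.zeros(dim)     # running sum of squared gradients
        w = np.zeros(dim)
        for t in range(T):
            g = grad_fn(t, w)    # hypothetical oracle for the round-t gradient
            g_sum += g
            g_sq += g * g
            w = -eta * g_sum / (np.sqrt(g_sq) + eps)
        return w
    ```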

  • CONFERENCE PAPER
    Kern T, Gyorgy A, 2016,

    SVRG++ with non-uniform sampling

    , 9th NIPS Workshop on Optimization for Machine Learning, Publisher: Neural Information Processing Systems Foundation, Inc.

    SVRG++ is a recent randomized optimization algorithm designed to solve non-strongly convex smooth composite optimization problems in the large data regime. In this paper we combine SVRG++ with non-uniform sampling of the data points (already present in the original SVRG algorithm), leading to an algorithm with the best sample complexity to date and state-of-the-art empirical performance. While the combination and the analysis of the algorithm is admittedly straightforward, our experimental results show significant improvement over the original SVRG++ method, with the new method outperforming all competitors on datasets where the smoothness of the components varies. This demonstrates that, despite its simplicity and limited novelty, this extension is important in practice.
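    The added ingredient is easy to sketch: sample component i with probability proportional to its smoothness constant L_i and reweight the variance-reduced gradient so the estimate stays unbiased. The sketch below keeps a plain SVRG outer loop (SVRG++'s doubling epoch lengths are omitted for brevity); `grads(i, x)` is a hypothetical oracle for the gradient of the i-th component.

    ```python
    import numpy as np

    def svrg_nonuniform(grads, x0, L, n_epochs=10, m=100, step=0.1):
        """SVRG-style updates with importance sampling proportional to L_i."""
        n = len(L)
        p = np.asarray(L, dtype=float)
        p /= p.sum()                               # sampling distribution ~ L_i
        x_ref = np.asarray(x0, dtype=float)
        x = x_ref.copy()
        for _ in range(n_epochs):
            full_grad = sum(grads(i, x_ref) for i in range(n)) / n
            for _ in range(m):
                i = np.random.choice(n, p=p)
                # Reweighting by 1/(n p_i) keeps the gradient estimate unbiased.
                corr = (grads(i, x) - grads(i, x_ref)) / (n * p[i])
                x = x - step * (corr + full_grad)
            x_ref = x.copy()
        return x
    ```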

  • CONFERENCE PAPER
    Kurek M, Deisenroth MP, Luk W, Todman T et al., 2016,

    Knowledge Transfer in Automatic Optimisation of Reconfigurable Designs

    , 24th IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM), Publisher: IEEE, Pages: 84-87
  • JOURNAL ARTICLE
    Ma Z-B, Yang Y, Liu Y-X, Bharath AA et al., 2016,

    Recurrently Decomposable 2-D Convolvers for FPGA-Based Digital Image Processing

    , IEEE Transactions on Circuits and Systems II: Express Briefs, Vol: 63, Pages: 979-983, ISSN: 1549-7747
  • CONFERENCE PAPER
    Pantic M, Evers V, Deisenroth M, Merino L, Schuller B et al., 2016,

    Social and Affective Robotics Tutorial

    , 24th ACM Multimedia Conference (MM), Publisher: ACM, Pages: 1477-1478

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
