Search results

  • Book chapter
    Kormushev P, Ahmadzadeh SR, 2016, Robot Learning for Persistent Autonomy, Handling Uncertainty and Networked Structure in Robot Control, Editors: Busoniu, Tamás, Publisher: Springer International Publishing, Pages: 3-28, ISBN: 978-3-319-26327-4
  • Conference paper
    Maurelli F, Lane D, Kormushev P, Caldwell D, Carreras M, Salvi J, Fox M, Long D, Kyriakopoulos K, Karras G et al., 2016, The PANDORA project: a success story in AUV autonomy, OCEANS Conference 2016, Publisher: IEEE, ISSN: 0197-7385

    This paper presents some of the results of the EU-funded project PANDORA - Persistent Autonomy Through Learning Adaptation Observation and Re-planning. The project was three and a half years long and involved several organisations across Europe. The application domain is underwater inspection and intervention, a topic particularly interesting for the oil and gas sector, whose representatives constituted the Industrial Advisory Board. Field trials were performed at The Underwater Centre, in Loch Linnhe, Scotland, and in harbour conditions close to Girona, Spain.

  • Conference paper
    Gyorgy A, Szepesvari C, 2016, Shifting regret, mirror descent, and matrices, Pages: 4324-4332

    We consider the problem of online prediction in changing environments. In this framework the performance of a predictor is evaluated as the loss relative to an arbitrarily changing predictor, whose individual components come from a base class of predictors. Typical results in the literature consider different base classes (experts, linear predictors on the simplex, etc.) separately. Introducing an arbitrary mapping inside the mirror descent algorithm, we provide a framework that unifies and extends existing results. As an example, we prove new shifting regret bounds for matrix prediction problems.
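
    For reference, the shifting regret against an arbitrarily changing comparator can be written in the standard form below (a conventional definition assumed here, not quoted from the paper):

    \[ \mathrm{Regret}_T(u_{1:T}) \;=\; \sum_{t=1}^{T} \ell_t(x_t) \;-\; \sum_{t=1}^{T} \ell_t(u_t), \]

    where \(x_t\) is the learner's prediction in round \(t\), \(\ell_t\) is the round-\(t\) loss, and \((u_1, \dots, u_T)\) is an arbitrary comparator sequence whose components come from the base class; bounds of this kind typically scale with the number of switches in the comparator sequence.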

  • Journal article
    Creswell A, Bharath AA, 2016, Task Specific Adversarial Cost Function

    The cost function used to train a generative model should fit the purpose of the model. If the model is intended for tasks such as generating perceptually correct samples, it is beneficial to maximise the likelihood of a sample drawn from the model, Q, coming from the same distribution as the training data, P. This is equivalent to minimising the Kullback-Leibler (KL) distance, KL[Q||P]. However, if the model is intended for tasks such as retrieval or classification it is beneficial to maximise the likelihood that a sample drawn from the training data is captured by the model, equivalent to minimising KL[P||Q]. The cost function used in adversarial training optimises the Jensen-Shannon entropy which can be seen as an even interpolation between KL[Q||P] and KL[P||Q]. Here, we propose an alternative adversarial cost function which allows easy tuning of the model for either task. Our task specific cost function is evaluated on a dataset of hand-written characters in the following tasks: generation, retrieval and one-shot learning.
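
    For orientation, the two divergence directions and the Jensen-Shannon divergence mentioned in the abstract have the standard definitions below (textbook forms, not reproduced from the paper):

    \[ \mathrm{KL}[Q\|P] = \int q(x) \log\frac{q(x)}{p(x)}\,dx, \qquad \mathrm{KL}[P\|Q] = \int p(x) \log\frac{p(x)}{q(x)}\,dx, \]

    \[ \mathrm{JS}[P\|Q] = \tfrac{1}{2}\,\mathrm{KL}\!\left[P \,\middle\|\, \tfrac{P+Q}{2}\right] + \tfrac{1}{2}\,\mathrm{KL}\!\left[Q \,\middle\|\, \tfrac{P+Q}{2}\right], \]

    which is symmetric in \(P\) and \(Q\), consistent with the abstract's view of the adversarial objective as an even interpolation between the two KL directions.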

  • Conference paper
    Pantic M, Evers V, Deisenroth M, Merino L, Schuller B et al., 2016, Social and Affective Robotics Tutorial, 24th ACM Multimedia Conference (MM), Publisher: ASSOC COMPUTING MACHINERY, Pages: 1477-1478
  • Conference paper
    Eleftheriadis S, Rudovic O, Deisenroth MP, Pantic M et al., 2016, Variational Gaussian Process Auto-Encoder for Ordinal Prediction of Facial Action Units, Pages: 154-170
  • Journal article
    Filippi S, Barnes CP, Kirk PDW, Kudo T, Kunida K, McMahon SS, Tsuchiya T, Wada T, Kuroda S, Stumpf MPH et al., 2016, Robustness of MEK-ERK Dynamics and Origins of Cell-to-Cell Variability in MAPK Signaling, Cell Reports
  • Conference paper
    Hu X, Prashanth LA, Gyorgy A, Szepesvari C et al., 2015, (Bandit) Convex Optimization with Biased Noisy Gradient Oracles, 8th NIPS Workshop on Optimization for Machine Learning

    For bandit convex optimization we propose a model, where a gradient estimation oracle acts as an intermediary between a noisy function evaluation oracle and the algorithms. The algorithms can control the bias-variance tradeoff in the gradient estimates. We prove lower and upper bounds for the minimax error of algorithms that interact with the objective function by controlling this oracle. The upper bounds replicate many existing results (capturing the essence of existing proofs) while the lower bounds put a limit on the achievable performance in this setup. In particular, our results imply that no algorithm can achieve the optimal minimax error rate in stochastic bandit smooth convex optimization.
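
    One common way to formalise such a gradient oracle (a bias-variance parameterisation assumed here for illustration; the paper's exact oracle classes may differ) is to require, for a tuning parameter \(\delta > 0\) chosen by the algorithm,

    \[ \bigl\| \mathbb{E}[g_\delta(x)] - \nabla f(x) \bigr\| \le c_1 \delta, \qquad \mathbb{E}\bigl[ \| g_\delta(x) - \mathbb{E}[g_\delta(x)] \|^2 \bigr] \le \frac{c_2}{\delta^2}, \]

    so that shrinking \(\delta\) reduces the bias of the gradient estimate \(g_\delta(x)\) while inflating its variance; this is the trade-off the algorithms control when interacting with the oracle.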

  • Conference paper
    Joulani P, Gyorgy A, Szepesvari C, 2015, Classification with Margin Constraints: A Unification with Applications to Optimization, 8th NIPS Workshop on Optimization for Machine Learning

    This paper introduces Classification with Margin Constraints (CMC), a simple generalization of cost-sensitive classification that unifies several learning settings. In particular, we show that a CMC classifier can be used, out of the box, to solve regression, quantile estimation, and several anomaly detection formulations. On the one hand, our reductions to CMC are at the loss level: the optimization problem to solve under the equivalent CMC setting is exactly the same as the optimization problem under the original (e.g. regression) setting. On the other hand, due to the close relationship between CMC and standard binary classification, the ideas proposed for efficient optimization in binary classification naturally extend to CMC. As such, any improvement in CMC optimization immediately transfers to the domains reduced to CMC, without the need for new derivations or programs. To our knowledge, this unified view has been overlooked by the existing practice in the literature, where an optimization technique (such as SMO or PEGASOS) is first developed for binary classification and then extended to other problem domains on a case-by-case basis. We demonstrate the flexibility of CMC by reducing two recent anomaly detection and quantile learning methods to CMC.
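
    As a concrete illustration, a margin-constrained objective in the spirit of CMC might take the following hinge-type form (a hypothetical generic form, not the paper's exact definition):

    \[ \min_{w}\; \frac{\lambda}{2} \|w\|^2 + \sum_{i=1}^{n} c_i \max\bigl( 0,\; m_i - y_i \langle w, x_i \rangle \bigr), \]

    where each example carries its own cost \(c_i\) and margin target \(m_i\); standard binary classification corresponds to \(c_i = 1\) and \(m_i = 1\), while per-example choices of \(c_i\) and \(m_i\) allow asymmetric losses of the kind used in quantile estimation and anomaly detection to be expressed within the same optimization problem.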

  • Conference paper
    Wu Y, György A, Szepesvari C, 2015, Online Learning with Gaussian Payoffs and Side Observations, 29th Annual Conference on Neural Information Processing Systems (NIPS), Publisher: Neural Information Processing Systems Foundation, Inc.

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
