Book chapter: Kormushev P, Ahmadzadeh SR, 2016,
Book chapter: Ahmadzadeh SR, Kormushev P, 2016,
Conference paper: Maurelli F, Lane D, Kormushev P, et al., 2016,
This paper presents some of the results of the EU-funded project PANDORA - Persistent Autonomy Through Learning Adaptation Observation and Re-planning. The project ran for three and a half years and involved several organisations across Europe. The application domain is underwater inspection and intervention, a topic of particular interest to the oil and gas sector, whose representatives constituted the Industrial Advisory Board. Field trials were performed at The Underwater Centre, in Loch Linnhe, Scotland, and in harbour conditions close to Girona, Spain.
Conference paper: Pantic M, Evers V, Deisenroth M, et al., 2016,
Conference paper: Gyorgy A, Szepesvari C, 2016,
Shifting regret, mirror descent, and matrices, Pages: 4324-4332
We consider the problem of online prediction in changing environments. In this framework the performance of a predictor is evaluated as the loss relative to an arbitrarily changing predictor, whose individual components come from a base class of predictors. Typical results in the literature consider different base classes (experts, linear predictors on the simplex, etc.) separately. Introducing an arbitrary mapping inside the mirror descent algorithm, we provide a framework that unifies and extends existing results. As an example, we prove new shifting regret bounds for matrix prediction problems.
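As a rough illustration of the kind of algorithm the abstract refers to (not the paper's own construction), the sketch below runs mirror descent on the probability simplex with the entropy mirror map, which reduces to the familiar exponentiated-gradient / Hedge update over a base class of experts. The loss vectors and learning rate are made-up values for demonstration.

```python
import numpy as np

def exponentiated_gradient(losses, eta=0.5):
    # Mirror descent on the probability simplex with the entropy mirror
    # map reduces to the exponentiated-gradient (Hedge) update.
    n = losses.shape[1]
    w = np.full(n, 1.0 / n)   # uniform initial weights over the experts
    for g in losses:          # g: loss (gradient) vector for this round
        w = w * np.exp(-eta * g)
        w /= w.sum()          # normalisation = Bregman projection onto the simplex
    return w

# Expert 0 suffers loss in rounds 1-2; expert 1 suffers loss in round 3.
losses = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
w = exponentiated_gradient(losses)
print(w)  # weight shifts toward expert 1, which has lower cumulative loss
```

Shifting-regret analyses then compare this learner not to the single best expert, but to a sequence of experts that may change over time.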
Journal article: Creswell A, Bharath AA, 2016,
Task Specific Adversarial Cost Function
The cost function used to train a generative model should fit the purpose of the model. If the model is intended for tasks such as generating perceptually correct samples, it is beneficial to maximise the likelihood of a sample drawn from the model, Q, coming from the same distribution as the training data, P. This is equivalent to minimising the Kullback-Leibler (KL) distance, KL[Q||P]. However, if the model is intended for tasks such as retrieval or classification, it is beneficial to maximise the likelihood that a sample drawn from the training data is captured by the model, equivalent to minimising KL[P||Q]. The cost function used in adversarial training optimises the Jensen-Shannon entropy, which can be seen as an even interpolation between KL[Q||P] and KL[P||Q]. Here, we propose an alternative adversarial cost function which allows easy tuning of the model for either task. Our task-specific cost function is evaluated on a dataset of hand-written characters in the following tasks: generation, retrieval and one-shot learning.
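To make the asymmetry the abstract relies on concrete, here is a minimal numerical sketch (the distributions are illustrative, not from the paper) showing that KL[Q||P] and KL[P||Q] generally differ, while the Jensen-Shannon divergence is symmetric by construction via the mixture M = (P+Q)/2:

```python
import numpy as np

def kl(a, b):
    # Kullback-Leibler divergence KL[a||b] for discrete distributions
    return float(np.sum(a * np.log(a / b)))

def js(p, q):
    # Jensen-Shannon divergence, defined through the mixture M = (P+Q)/2
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.6, 0.3, 0.1])  # stand-in for the data distribution P
q = np.array([0.4, 0.4, 0.2])  # stand-in for the model distribution Q

print(kl(q, p), kl(p, q))  # the two KL directions differ
print(js(p, q), js(q, p))  # JS gives the same value either way
```

Minimising one KL direction or the other thus favours different model behaviours, which is the tuning the paper's task-specific cost function exposes.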
Journal article: Filippi S, Barnes CP, Kirk PDW, et al., 2016,
Robustness of MEK-ERK Dynamics and Origins of Cell-to-Cell Variability in MAPK Signaling, Cell Reports
Conference paper: Eleftheriadis S, Rudovic O, Deisenroth MP, et al., 2016,
Variational Gaussian Process Auto-Encoder for Ordinal Prediction of Facial Action Units, Pages: 154-170
Conference paper: Joulani P, Gyorgy A, Szepesvari C, 2015,
Classification with Margin Constraints: A Unification with Applications to Optimization, 8th NIPS Workshop on Optimization for Machine Learning
This paper introduces Classification with Margin Constraints (CMC), a simple generalization of cost-sensitive classification that unifies several learning settings. In particular, we show that a CMC classifier can be used, out of the box, to solve regression, quantile estimation, and several anomaly detection formulations. On the one hand, our reductions to CMC are at the loss level: the optimization problem to solve under the equivalent CMC setting is exactly the same as the optimization problem under the original (e.g. regression) setting. On the other hand, due to the close relationship between CMC and standard binary classification, the ideas proposed for efficient optimization in binary classification naturally extend to CMC. As such, any improvement in CMC optimization immediately transfers to the domains reduced to CMC, without the need for new derivations or programs. To our knowledge, this unified view has been overlooked by the existing practice in the literature, where an optimization technique (such as SMO or PEGASOS) is first developed for binary classification and then extended to other problem domains on a case-by-case basis. We demonstrate the flexibility of CMC by reducing two recent anomaly detection and quantile learning methods to CMC.
Conference paper: Hu X, Prashanth LA, Gyorgy A, et al., 2015,
(Bandit) Convex Optimization with Biased Noisy Gradient Oracles, 8th NIPS Workshop on Optimization for Machine Learning
For bandit convex optimization we propose a model where a gradient estimation oracle acts as an intermediary between a noisy function evaluation oracle and the algorithms. The algorithms can control the bias-variance tradeoff in the gradient estimates. We prove lower and upper bounds for the minimax error of algorithms that interact with the objective function by controlling this oracle. The upper bounds replicate many existing results (capturing the essence of existing proofs), while the lower bounds put a limit on the achievable performance in this setup. In particular, our results imply that no algorithm can achieve the optimal minimax error rate in stochastic bandit smooth convex optimization.
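One standard way such a gradient estimation oracle can trade bias for variance (our illustration of the general phenomenon, not necessarily the paper's construction) is a two-point finite difference built from noisy function evaluations: shrinking the perturbation `delta` reduces the bias of the estimate but inflates its variance. The objective `exp(x)` and the names `noisy_f`, `grad_estimate`, `sigma` are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x, sigma=0.1):
    # Noisy zeroth-order (function evaluation) oracle for f(x) = exp(x)
    return np.exp(x) + sigma * rng.normal()

def grad_estimate(x, delta):
    # Two-point finite-difference estimate of f'(x). delta controls the
    # tradeoff: O(delta^2) bias from curvature, O(1/delta^2) noise variance.
    return (noisy_f(x + delta) - noisy_f(x - delta)) / (2.0 * delta)

x = 1.0  # true gradient f'(1) = e
for delta in (1.0, 0.1, 0.01):
    ests = [grad_estimate(x, delta) for _ in range(5000)]
    print(f"delta={delta}: mean={np.mean(ests):.3f}, var={np.var(ests):.4f}")
```

The lower bounds in the paper say that no way of steering this kind of tradeoff lets a bandit algorithm match the optimal minimax rate in the stochastic smooth convex setting.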
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.