Search results

  • JOURNAL ARTICLE
    Liepe J, Kirk P, Filippi S, Toni T, Barnes CP, Stumpf MPH et al., 2014,

    A framework for parameter estimation and model selection from experimental data in systems biology using approximate Bayesian computation

    , Nature Protocols, Vol: 9, Pages: 439-456, ISSN: 1754-2189
  • JOURNAL ARTICLE
    Filippi S, Barnes CP, Cornebise J, Stumpf MPH et al., 2013,

    On optimality of kernels for approximate Bayesian computation using sequential Monte Carlo

    , Statistical Applications in Genetics and Molecular Biology, Vol: 12, ISSN: 2194-6302
  • JOURNAL ARTICLE
    Silk D, Filippi S, Stumpf MPH, 2013,

    Optimizing threshold-schedules for sequential approximate Bayesian computation: applications to molecular systems

    , Statistical Applications in Genetics and Molecular Biology, Vol: 12, Pages: 603-618, ISSN: 2194-6302

    Likelihood-free sequential Approximate Bayesian Computation (ABC) algorithms are increasingly popular inference tools for complex biological models. Such algorithms proceed by constructing a succession of probability distributions over the parameter space, conditional upon the simulated data lying in an ε-ball around the observed data, for decreasing values of the threshold ε. While in theory the distributions (starting from a suitably defined prior) will converge towards the unknown posterior as ε tends to zero, the exact sequence of thresholds can impact upon the computational efficiency and success of a particular application. In particular, we show here that the current preferred method of choosing thresholds, as a pre-determined quantile of the distances between simulated and observed data from the previous population, can lead to the inferred posterior distribution being very different to the true posterior. Threshold selection thus remains an important challenge. Here we propose that the threshold-acceptance rate curve may be used to determine threshold schedules that avoid local optima, while balancing the need to minimise the threshold with computational efficiency. Furthermore, we provide an algorithm based upon the unscented transform that enables the threshold-acceptance rate curve to be efficiently predicted in the case of deterministic and stochastic state space models.
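
    The quantile rule criticised in this abstract is easy to state in code. Below is a minimal rejection-sampling sketch of the schedule, assuming a toy one-dimensional Gaussian model; it omits the perturbation kernels and importance weights of a full sequential ABC sampler, and the model, the distance and the 0.5 quantile are illustrative choices, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate(theta, n=20):
            """Toy model: n Gaussian draws with unknown mean theta."""
            return rng.normal(theta, 1.0, size=n)

        def distance(x, y):
            """Discrepancy between simulated and observed data (difference of means)."""
            return abs(x.mean() - y.mean())

        observed = rng.normal(2.0, 1.0, size=20)  # data from a "true" mean of 2.0

        def abc_population(eps, n_accept=200):
            """Accept prior draws whose simulations land in the eps-ball."""
            thetas, dists = [], []
            while len(thetas) < n_accept:
                theta = rng.uniform(-5.0, 5.0)           # draw from the prior
                d = distance(simulate(theta), observed)
                if d < eps:
                    thetas.append(theta)
                    dists.append(d)
            return np.array(thetas), np.array(dists)

        # The pre-determined quantile rule: the next threshold is a fixed
        # quantile (here 0.5) of the previous population's accepted distances.
        eps = np.inf
        for t in range(5):
            thetas, dists = abc_population(eps)
            eps = np.quantile(dists, 0.5)
            print(f"population {t}: next eps = {eps:.4f}, "
                  f"accepted-theta mean = {thetas.mean():.3f}")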

  • JOURNAL ARTICLE
    Barnes CP, Filippi S, Stumpf MPH, Thorne T et al., 2012,

    Considerate approaches to constructing summary statistics for ABC model selection

    , Statistics and Computing, Vol: 22, Pages: 1181-1197, ISSN: 0960-3174

    For nearly any challenging scientific problem evaluation of the likelihood is problematic if not impossible. Approximate Bayesian computation (ABC) allows us to employ the whole Bayesian formalism to problems where we can use simulations from a model, but cannot evaluate the likelihood directly. When summary statistics of real and simulated data are compared, rather than the data directly, information is lost, unless the summary statistics are sufficient. Sufficient statistics are, however, not common, and without them statistical inferences in ABC are to be considered with caution. Previously other authors have attempted to combine different statistics in order to construct (approximately) sufficient statistics using search and information heuristics. Here we employ an information-theoretical framework that can be used to construct appropriate (approximately sufficient) statistics by combining different statistics until the loss of information is minimized. We start from a potentially large number of different statistics and choose the smallest set that captures (nearly) the same information as the complete set. We then demonstrate that such sets of statistics can be constructed for both parameter estimation and model selection problems, and we apply our approach to a range of illustrative and real-world model selection problems.
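
    As a rough illustration of the construction described above, the sketch below greedily grows a set of summary statistics until the estimated gain in information about the parameter falls below a tolerance. The candidate pool, the crude histogram-based mutual-information estimator and the stopping tolerance are all simplifying assumptions for the demo, not the paper's exact criterion.

        import numpy as np

        rng = np.random.default_rng(1)

        # Prior draws of the parameter and matching simulated data sets.
        thetas = rng.uniform(0.0, 5.0, size=5000)
        data = rng.normal(thetas[:, None], 1.0, size=(5000, 30))

        # A pool of candidate summary statistics (some informative, some not).
        candidates = {
            "mean": data.mean(axis=1),
            "median": np.median(data, axis=1),
            "std": data.std(axis=1),   # nearly uninformative about the mean
            "max": data.max(axis=1),
        }

        def mutual_information(theta, stats, bins=12):
            """Crude histogram estimate of I(theta; stats); degrades in high dimensions."""
            joint, _ = np.histogramdd(np.column_stack([theta] + stats), bins=bins)
            joint /= joint.sum()
            p_theta = joint.sum(axis=tuple(range(1, joint.ndim)))
            p_stats = joint.sum(axis=0)
            outer = p_theta.reshape([-1] + [1] * (joint.ndim - 1)) * p_stats
            nz = joint > 0
            return float(np.sum(joint[nz] * np.log(joint[nz] / outer[nz])))

        # Greedy forward selection: add the statistic with the largest
        # information gain, stop when the gain is negligible.
        chosen, current_mi, tol, pool = [], 0.0, 0.02, dict(candidates)
        while pool and len(chosen) < 3:
            gains = {name: mutual_information(thetas, [candidates[c] for c in chosen] + [s])
                     - current_mi for name, s in pool.items()}
            best = max(gains, key=gains.get)
            if gains[best] < tol:
                break
            chosen.append(best)
            current_mi += gains[best]
            del pool[best]

        print("selected statistics:", chosen)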

  • JOURNAL ARTICLE
    Filippi S, Cappe O, Garivier A, 2011,

    Optimally Sensing a Single Channel Without Prior Information: The Tiling Algorithm and Regret Bounds

    , IEEE Journal of Selected Topics in Signal Processing, Vol: 5, Pages: 68-76, ISSN: 1932-4553
  • CONFERENCE PAPER
    Filippi S, Cappe O, Garivier A, 2010,

    Optimism in Reinforcement Learning and Kullback-Leibler Divergence

    , ALLERTON 2010

    We consider model-based reinforcement learning in finite Markov Decision Processes (MDPs), focussing on so-called optimistic strategies. In MDPs, optimism can be implemented by carrying out extended value iterations under a constraint of consistency with the estimated model transition probabilities. The UCRL2 algorithm by Auer, Jaksch and Ortner (2009), which follows this strategy, has recently been shown to guarantee near-optimal regret bounds. In this paper, we strongly argue in favor of using the Kullback-Leibler (KL) divergence for this purpose. By studying the linear maximization problem under KL constraints, we provide an efficient algorithm, termed KL-UCRL, for solving KL-optimistic extended value iteration. Using recent deviation bounds on the KL divergence, we prove that KL-UCRL provides the same guarantees as UCRL2 in terms of regret. However, numerical experiments on classical benchmarks show a significantly improved behavior, particularly when the MDP has reduced connectivity. To support this observation, we provide elements of comparison between the two algorithms based on geometric considerations.
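
    The inner step of KL-optimistic extended value iteration is a linear maximisation over a KL ball around the empirical transition estimate. The paper derives an efficient dedicated algorithm for this step; the sketch below instead hands the same constrained problem to a generic solver, purely to make the optimism step concrete. All numbers are illustrative.

        import numpy as np
        from scipy.optimize import minimize

        p_hat = np.array([0.5, 0.3, 0.2])  # empirical transition estimate
        V = np.array([1.0, 0.0, 2.0])      # current value of each next state
        eps = 0.05                         # KL confidence radius

        def kl(p, q):
            """KL divergence KL(p || q) for discrete distributions."""
            return float(np.sum(p * np.log(p / q)))

        # Optimism: pick the transition vector q inside the KL ball around
        # p_hat that maximises the expected next-state value q . V.
        res = minimize(
            lambda q: -(q @ V),
            x0=p_hat,
            method="SLSQP",
            bounds=[(1e-9, 1.0)] * len(p_hat),
            constraints=[
                {"type": "eq", "fun": lambda q: q.sum() - 1.0},
                {"type": "ineq", "fun": lambda q: eps - kl(p_hat, q)},
            ],
        )
        print("optimistic transitions:", res.x.round(4))
        print("optimistic expected value:", round(float(res.x @ V), 4))

    A production implementation would exploit the structure of this maximisation, as the paper does, rather than calling a general-purpose solver inside every value-iteration sweep.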

  • CONFERENCE PAPER
    Filippi S, Cappe O, Garivier A, Szepesvari C et al., 2010,

    Parametric bandits: The generalized linear case

    , Neural Information Processing Systems (NIPS’2010)
  • JOURNAL ARTICLE
    Creswell A, Bharath AA,

    Task Specific Adversarial Cost Function

    The cost function used to train a generative model should fit the purpose of the model. If the model is intended for tasks such as generating perceptually correct samples, it is beneficial to maximise the likelihood of a sample drawn from the model, Q, coming from the same distribution as the training data, P. This is equivalent to minimising the Kullback-Leibler (KL) distance, KL[Q||P]. However, if the model is intended for tasks such as retrieval or classification it is beneficial to maximise the likelihood that a sample drawn from the training data is captured by the model, equivalent to minimising KL[P||Q]. The cost function used in adversarial training optimises the Jensen-Shannon entropy which can be seen as an even interpolation between KL[Q||P] and KL[P||Q]. Here, we propose an alternative adversarial cost function which allows easy tuning of the model for either task. Our task specific cost function is evaluated on a dataset of hand-written characters in the following tasks: generation, retrieval and one-shot learning.
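
    The asymmetry between KL[Q||P] and KL[P||Q], and the Jensen-Shannon quantity sitting between them, can be checked numerically on a pair of toy discrete distributions; the probabilities below are made up purely for illustration.

        import numpy as np

        def kl(p, q):
            """Kullback-Leibler divergence KL[p||q] for discrete distributions."""
            return float(np.sum(p * np.log(p / q)))

        P = np.array([0.7, 0.2, 0.1])  # "data" distribution
        Q = np.array([0.4, 0.4, 0.2])  # "model" distribution

        # The two directions differ, which is why the two training objectives
        # favour different behaviours: minimising KL[Q||P] rewards samples that
        # look like the data, minimising KL[P||Q] rewards covering all the data.
        print("KL[Q||P] =", round(kl(Q, P), 4))
        print("KL[P||Q] =", round(kl(P, Q), 4))

        # Jensen-Shannon divergence: an even interpolation between the two,
        # computed against the mixture M = (P + Q) / 2.
        M = 0.5 * (P + Q)
        print("JS(P,Q)  =", round(0.5 * kl(P, M) + 0.5 * kl(Q, M), 4))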

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
