Imperial College London

Dr Nikolas Kantas

Faculty of Natural Sciences, Department of Mathematics

Reader in Statistics

Contact

+44 (0)20 7594 2772
n.kantas

Location

538, Huxley Building, South Kensington Campus


Publications

Citation

BibTeX format

@article{Beskos:2016:10.1214/15-AAP1113,
author = {Beskos, A and Jasra, A and Kantas, N and Thiery, A},
doi = {10.1214/15-AAP1113},
journal = {Annals of Applied Probability},
pages = {1111--1146},
title = {On the convergence of adaptive sequential Monte Carlo methods},
url = {http://dx.doi.org/10.1214/15-AAP1113},
volume = {26},
year = {2016}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - In several implementations of Sequential Monte Carlo (SMC) methods it is natural and important, in terms of algorithmic efficiency, to exploit the information of the history of the samples to optimally tune their subsequent propagations. In this article we provide a carefully formulated asymptotic theory for a class of such adaptive SMC methods. The theoretical framework developed here will cover, under assumptions, several commonly used SMC algorithms [Chopin, Biometrika 89 (2002) 539–551; Jasra et al., Scand. J. Stat. 38 (2011) 1–22; Schäfer and Chopin, Stat. Comput. 23 (2013) 163–184]. There are only limited results about the theoretical underpinning of such adaptive methods: we will bridge this gap by providing a weak law of large numbers (WLLN) and a central limit theorem (CLT) for some of these algorithms. The latter seems to be the first result of its kind in the literature and provides a formal justification of algorithms used in many real data contexts [Jasra et al. (2011); Schäfer and Chopin (2013)]. We establish that for a general class of adaptive SMC algorithms [Chopin (2002)], the asymptotic variance of the estimators from the adaptive SMC method is identical to a “limiting” SMC algorithm which uses ideal proposal kernels. Our results are supported by application on a complex high-dimensional posterior distribution associated with the Navier–Stokes model, where adapting high-dimensional parameters of the proposal kernels is critical for the efficiency of the algorithm.
AU - Beskos,A
AU - Jasra,A
AU - Kantas,N
AU - Thiery,A
DO - 10.1214/15-AAP1113
EP - 1146
PY - 2016///
SN - 1050-5164
SP - 1111
TI - On the convergence of adaptive sequential Monte Carlo methods
T2 - Annals of Applied Probability
UR - http://dx.doi.org/10.1214/15-AAP1113
UR - http://hdl.handle.net/10044/1/38719
VL - 26
ER -
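
Illustrative sketch

The abstract above describes tuning the propagation of SMC particles from the history of the samples, for example choosing tempering increments and proposal scales adaptively. The Python sketch below is an illustrative assumption, not code from the paper: it uses a toy one-dimensional Gaussian target rather than the Navier–Stokes posterior, and picks each tempering increment by bisection so that the effective sample size (ESS) of the current particles stays near a target, then resamples and applies a random-walk Metropolis move whose step size is also tuned from the particle spread.

# Minimal adaptive SMC tempering sketch (illustrative assumptions throughout):
# the next temperature is chosen from the current particles so that the ESS
# stays near a target, the kind of history-dependent adaptation analysed in
# the paper. Toy 1-D Gaussian target, not the paper's Navier-Stokes example.
import numpy as np

rng = np.random.default_rng(0)

def log_prior(x):            # toy prior: standard normal
    return -0.5 * x**2

def log_likelihood(x):       # toy likelihood term that gets tempered in
    return -0.5 * (x - 2.0)**2

def ess(log_w):
    # effective sample size of the normalised importance weights
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return 1.0 / np.sum(w**2)

def next_temperature(x, phi, n_target):
    # bisection for the largest phi_new <= 1 with ESS(phi_new) >= n_target
    lo, hi = phi, 1.0
    if ess((hi - phi) * log_likelihood(x)) >= n_target:
        return 1.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if ess((mid - phi) * log_likelihood(x)) >= n_target:
            lo = mid
        else:
            hi = mid
    return lo

N, phi = 1000, 0.0
x = rng.standard_normal(N)                    # particles drawn from the prior
while phi < 1.0:
    phi_new = next_temperature(x, phi, n_target=N / 2)   # adaptive choice
    log_w = (phi_new - phi) * log_likelihood(x)          # incremental weights
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    x = x[rng.choice(N, size=N, p=w)]                    # multinomial resampling
    # random-walk Metropolis move targeting the tempered posterior; the step
    # size is itself adapted from the particle history (their spread)
    step = 2.38 * x.std()
    prop = x + step * rng.standard_normal(N)
    log_acc = (log_prior(prop) + phi_new * log_likelihood(prop)
               - log_prior(x) - phi_new * log_likelihood(x))
    x = np.where(np.log(rng.uniform(size=N)) < log_acc, prop, x)
    phi = phi_new

print("final temperature:", phi, "posterior mean estimate:", x.mean())

In this sketch the ESS-based temperature choice and the particle-dependent step size play the role of the adaptive quantities whose limiting behaviour (WLLN and CLT) the paper studies; the target, constants and function names used here are illustrative assumptions only.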