Imperial College London

Dr Nikolas Kantas

Faculty of Natural Sciences, Department of Mathematics

Senior Lecturer
 
 
 

Contact

 

+44 (0)20 7594 2772
n.kantas
Website

 
 

Location

 

538 Huxley Building, South Kensington Campus



 

Publications


23 results found

Lu X, Adams N, Kantas N, 2019, On adaptive estimation for dynamic Bernoulli bandits, Foundations of Data Science, Vol: 1, Pages: 197-225, ISSN: 2639-8001

The multi-armed bandit (MAB) problem is a classic example of the exploration-exploitation dilemma. It is concerned with maximising the total rewards for a gambler by sequentially pulling an arm from a multi-armed slot machine where each arm is associated with a reward distribution. In static MABs, the reward distributions do not change over time, while in dynamic MABs, each arm's reward distribution can change, and the optimal arm can switch over time. Motivated by many real applications where rewards are binary, we focus on dynamic Bernoulli bandits. Standard methods like $\epsilon$-Greedy and Upper Confidence Bound (UCB), which rely on the sample mean estimator, often fail to track changes in the underlying reward for dynamic problems. In this paper, we overcome the shortcoming of slow response to change by deploying adaptive estimation in the standard methods and propose a new family of algorithms, which are adaptive versions of $\epsilon$-Greedy, UCB, and Thompson sampling. These new methods are simple and easy to implement. Moreover, they do not require any prior knowledge about the dynamic reward process, which is important for real applications. We examine the new algorithms numerically in different scenarios and the results show solid improvements of our algorithms in dynamic environments.

Journal article
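
The adaptive-estimation idea in this paper can be illustrated with a small sketch: replace the sample mean inside $\epsilon$-Greedy with an exponentially weighted (forgetting-factor) estimate so that old rewards are gradually discounted. This is a generic illustration rather than the paper's exact algorithms or tuning; the function names, the forgetting factor `gamma` and the toy environment are my own choices.

```python
import numpy as np

def adaptive_eps_greedy(reward_fn, n_arms, horizon, eps=0.1, gamma=0.95, seed=0):
    """Illustrative epsilon-greedy bandit with an exponentially weighted
    (forgetting-factor) mean, so estimates can track drifting Bernoulli rewards.
    gamma close to 1 forgets slowly; smaller gamma reacts faster to change."""
    rng = np.random.default_rng(seed)
    est = np.full(n_arms, 0.5)      # running reward estimates
    weight = np.zeros(n_arms)       # effective sample weight per arm
    total = 0.0
    for t in range(horizon):
        if rng.random() < eps:
            arm = rng.integers(n_arms)          # explore
        else:
            arm = int(np.argmax(est))           # exploit current estimates
        r = reward_fn(arm, t)                   # Bernoulli reward in {0, 1}
        total += r
        # Exponential forgetting: discount old evidence before adding the new sample.
        weight[arm] = gamma * weight[arm] + 1.0
        est[arm] += (r - est[arm]) / weight[arm]
    return total

# Toy dynamic environment: the best arm switches halfway through the horizon.
def drifting_reward(arm, t, rng=np.random.default_rng(1)):
    p = [0.8, 0.3] if t < 500 else [0.2, 0.7]
    return rng.binomial(1, p[arm])

print(adaptive_eps_greedy(drifting_reward, n_arms=2, horizon=1000))
```

The same forgetting-factor estimate can be substituted for the plain sample mean inside UCB or Thompson sampling, in the spirit of the adaptive variants described in the abstract.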

Llopis FP, Kantas N, Beskos A, Jasra A et al., 2018, Particle filtering for stochastic Navier-Stokes signal observed with linear additive noise, SIAM Journal on Scientific Computing, Vol: 40, Pages: A1544-A1565, ISSN: 1064-8275

We consider a non-linear filtering problem, whereby the signal obeys the stochastic Navier-Stokes equations and is observed through a linear mapping with additive noise. The setup is relevant to data assimilation for numerical weather prediction and climate modelling, where similar models are used for unknown ocean or wind velocities. We present a particle filtering methodology that uses likelihood informed importance proposals, adaptive tempering, and a small number of appropriate Markov Chain Monte Carlo steps. We provide a detailed design for each of these steps and show in our numerical examples that they are all crucial in terms of achieving good performance and efficiency.

Journal article
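
One reusable ingredient mentioned in the abstract is adaptive tempering: the next tempering exponent is chosen so that the effective sample size (ESS) of the incremental weights stays close to a target. The sketch below shows only that generic step, assuming per-particle log-likelihood values are available; it omits the likelihood-informed proposals and MCMC moves the paper combines with it, and all names and defaults are illustrative.

```python
import numpy as np

def ess(logw):
    """Effective sample size of normalised importance weights."""
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

def next_temperature(loglik, phi_old, target_ess):
    """Pick the next tempering exponent phi in (phi_old, 1] by bisection so that
    the incremental weights exp((phi - phi_old) * loglik) give roughly target_ess.
    A generic adaptive-tempering step, not the paper's full filter."""
    lo, hi = phi_old, 1.0
    if ess((hi - phi_old) * loglik) >= target_ess:
        return 1.0                              # can jump straight to the full likelihood
    for _ in range(50):                         # bisection on the (monotone) ESS curve
        mid = 0.5 * (lo + hi)
        if ess((mid - phi_old) * loglik) > target_ess:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example with synthetic per-particle log-likelihoods.
rng = np.random.default_rng(0)
loglik = rng.normal(-50.0, 10.0, size=1000)
print(next_temperature(loglik, phi_old=0.0, target_ess=500.0))
```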

Marowka M, Peters GW, Kantas N, Bagnarosa G et al., 2017, Some Recent Developments in Markov Chain Monte Carlo for Cointegrated Time Series, ESAIM: Proceedings, Vol: 59, Pages: 76-103, ISSN: 1270-900X

We consider multivariate time series that exhibit reduced rank cointegration, which means a lower dimensional linear projection of the process becomes stationary. We review recent Markov Chain Monte Carlo approaches suitable for Bayesian inference, such as the Gibbs sampler of [41] and the Geodesic Hamiltonian Monte Carlo method of [3]. We then propose extensions that allow the ideas in both methods to be applied to cointegrated time series with non-Gaussian noise. We illustrate the efficiency and accuracy of these extensions using appropriate numerical experiments.

Journal article

Whiteley N, Kantas N, 2017, Calculating principal eigen-functions of non-negative integral kernels: particle approximations and applications, Mathematics of Operations Research, Vol: 42, Pages: 1007-1034, ISSN: 1526-5471

Often in applications such as rare events estimation or optimal control it is required that one calculates the principal eigen-function and eigen-value of a non-negative integral kernel. Except in the finite-dimensional case, usually neither the principal eigen-function nor the eigen-value can be computed exactly. In this paper, we develop numerical approximations for these quantities. We show how a generic interacting particle algorithm can be used to deliver numerical approximations of the eigen-quantities and the associated so-called "twisted" Markov kernel as well as how these approximations are relevant to the aforementioned applications. In addition, we study a collection of random integral operators underlying the algorithm, address some of their mean and path-wise properties, and obtain $L_{r}$ error estimates. Finally, numerical examples are provided in the context of importance sampling for computing tail probabilities of Markov chains and computing value functions for a class of stochastic optimal control problems.

Journal article
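
In the finite-dimensional special case the eigen-quantities can be computed directly, which makes the objects in the abstract concrete: the sketch below runs power iteration on a non-negative matrix and forms the associated "twisted" Markov kernel $K(x,y)h(y)/(\lambda h(x))$. This is only the matrix analogue of the problem; the paper's interacting particle algorithm targets general integral kernels and is not reproduced here.

```python
import numpy as np

def principal_eigen(K, iters=500):
    """Power iteration for the principal eigenvalue/eigenvector of a
    non-negative matrix K (the finite-state analogue of the integral
    kernels treated in the paper)."""
    h = np.ones(K.shape[0])
    lam = 1.0
    for _ in range(iters):
        h = K @ h
        lam = h.max()          # any positive normalisation works
        h /= lam
    return lam, h

def twisted_kernel(K, lam, h):
    """The associated 'twisted' Markov kernel K_h(x, y) = K(x, y) h(y) / (lam h(x))."""
    return K * h[None, :] / (lam * h[:, None])

K = np.array([[0.5, 0.4], [0.2, 0.9]])
lam, h = principal_eigen(K)
P = twisted_kernel(K, lam, h)
print(lam, P.sum(axis=1))      # rows of the twisted kernel sum to 1
```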

Beskos A, Jasra A, Kantas N, Thiery A et al., 2016, On the convergence of adaptive sequential Monte Carlo methods, Annals of Applied Probability, Vol: 26, Pages: 1111-1146, ISSN: 1050-5164

In several implementations of Sequential Monte Carlo (SMC) methods it is natural and important, in terms of algorithmic efficiency, to exploit the information of the history of the samples to optimally tune their subsequent propagations. In this article we provide a carefully formulated asymptotic theory for a class of such adaptive SMC methods. The theoretical framework developed here will cover, under assumptions, several commonly used SMC algorithms [Chopin, Biometrika 89 (2002) 539–551; Jasra et al., Scand. J. Stat. 38 (2011) 1–22; Schäfer and Chopin, Stat. Comput. 23 (2013) 163–184]. There are only limited results about the theoretical underpinning of such adaptive methods: we will bridge this gap by providing a weak law of large numbers (WLLN) and a central limit theorem (CLT) for some of these algorithms. The latter seems to be the first result of its kind in the literature and provides a formal justification of algorithms used in many real data contexts [Jasra et al. (2011); Schäfer and Chopin (2013)]. We establish that for a general class of adaptive SMC algorithms [Chopin (2002)], the asymptotic variance of the estimators from the adaptive SMC method is identical to a “limiting” SMC algorithm which uses ideal proposal kernels. Our results are supported by an application to a complex high-dimensional posterior distribution associated with the Navier–Stokes model, where adapting high-dimensional parameters of the proposal kernels is critical for the efficiency of the algorithm.

Journal article

Kantas N, Doucet A, Singh SS, Maciejowski J, Chopin N et al., 2015, On Particle Methods for Parameter Estimation in State-Space Models, Statistical Science, Vol: 30, Pages: 328-351, ISSN: 0883-4237

Nonlinear non-Gaussian state-space models are ubiquitous in statistics, econometrics, information engineering and signal processing. Particle methods, also known as Sequential Monte Carlo (SMC) methods, provide reliable numerical approximations to the associated state inference problems. However, in most applications, the state-space model of interest also depends on unknown static parameters that need to be estimated from the data. In this context, standard particle methods fail and it is necessary to rely on more sophisticated algorithms. The aim of this paper is to present a comprehensive review of particle methods that have been proposed to perform static parameter estimation in state-space models. We discuss the advantages and limitations of these methods and illustrate their performance on simple models.

Journal article
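
The basic object this review builds on is the particle (bootstrap) filter estimate of the likelihood $p(y_{1:T}\mid\theta)$, which parameter estimation methods such as particle marginal Metropolis-Hastings then embed in an outer MCMC or optimisation loop. The sketch below computes that estimate for a toy scalar linear-Gaussian model; the model, function names and settings are illustrative and not taken from the paper.

```python
import numpy as np

def bootstrap_log_likelihood(y, theta, n_particles=500, seed=0):
    """Bootstrap particle filter estimate of log p(y_{1:T} | theta) for the toy
    model x_t = theta * x_{t-1} + v_t, y_t = x_t + e_t, with v_t, e_t ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)                            # matches the simulator's x_0 = 0
    loglik = 0.0
    for yt in y:
        x = theta * x + rng.normal(size=n_particles)     # propagate with the prior dynamics
        logw = -0.5 * (yt - x) ** 2 - 0.5 * np.log(2 * np.pi)  # Gaussian observation density
        m = logw.max()
        loglik += m + np.log(np.mean(np.exp(logw - m)))  # running likelihood estimate
        w = np.exp(logw - m)
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x[idx]                                       # multinomial resampling
    return loglik

# Simulate data from theta = 0.8 and compare log-likelihood estimates at two parameter values.
rng = np.random.default_rng(1)
x, y = 0.0, []
for _ in range(100):
    x = 0.8 * x + rng.normal()
    y.append(x + rng.normal())
y = np.array(y)
print(bootstrap_log_likelihood(y, 0.8), bootstrap_log_likelihood(y, 0.2))
```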

Kantas N, Beskos A, Jasra A, 2014, Sequential Monte Carlo methods for high-dimensional inverse problems: a case study for the Navier-Stokes equations, SIAM/ASA Journal on Uncertainty Quantification, Vol: 2, Pages: 464-489, ISSN: 2166-2525

We consider the inverse problem of estimating the initial condition of a partial differential equation, which is only observed through noisy measurements at discrete time intervals. In particular, we focus on the case where Eulerian measurements are obtained from the time and space evolving vector field, whose evolution obeys the two-dimensional Navier-Stokes equations defined on a torus. This context is particularly relevant to the area of numerical weather forecasting and data assimilation. We will adopt a Bayesian formulation resulting from a particular regularization that ensures the problem is well posed. In the context of Monte Carlo based inference, it is a challenging task to obtain samples from the resulting high dimensional posterior on the initial condition. In real data assimilation applications it is common for computational methods to invoke the use of heuristics and Gaussian approximations. The resulting inferences are biased and not well-justified in the presence of non-linear dynamics and observations. On the other hand, Monte Carlo methods can be used to assimilate data in a principled manner, but are often perceived as inefficient in this context due to the high-dimensionality of the problem. In this work we will propose a generic Sequential Monte Carlo (SMC) sampling approach for high dimensional inverse problems that overcomes these difficulties. The method builds upon Markov chain Monte Carlo (MCMC) techniques, which are currently considered as benchmarks for evaluating data assimilation algorithms used in practice. In our numerical examples, the proposed SMC approach achieves the same accuracy as MCMC but in a much more efficient manner.

Journal article

Jasra A, Kantas N, Ehrlich E, 2014, Approximate inference for observation-driven time series models with intractable likelihoods, ACM Transactions on Modeling and Computer Simulation, Vol: 24, ISSN: 1558-1195

In this article, we consider approximate Bayesian parameter inference for observation-driven time series models. Such statistical models appear in a wide variety of applications, including econometrics and applied mathematics. This article considers the scenario where the likelihood function cannot be evaluated pointwise; in such cases, one cannot perform exact statistical inference, including parameter estimation, which often requires advanced computational algorithms, such as Markov Chain Monte Carlo (MCMC). We introduce a new approximation based upon Approximate Bayesian Computation (ABC). Under some conditions, we show that as n → ∞, with n the length of the time series, the ABC posterior has, almost surely, a Maximum A Posteriori (MAP) estimator of the parameters that is often different from the true parameter. However, a noisy ABC MAP, which perturbs the original data, asymptotically converges to the true parameter, almost surely. In order to draw statistical inference, for the ABC approximation adopted, standard MCMC algorithms can have acceptance probabilities that fall at an exponential rate in n and slightly more advanced algorithms can mix poorly. We develop a new and improved MCMC kernel, which is based upon an exact approximation of a marginal algorithm, whose cost per iteration is random, but the expected cost, for good performance, is shown to be $O(n^2)$ per iteration. We implement our new MCMC kernel for parameter inference from models in econometrics.

Journal article
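
A minimal way to see the ABC approximation discussed here is an ABC-MCMC chain in which the intractable likelihood is replaced by an indicator that simulated data lands within a tolerance of the observed data in some summary. The sketch below does exactly that for a toy autoregressive recursion with a flat prior and random-walk proposal; it is not the improved kernel developed in the paper, and the tolerance, summary and model are my own illustrative choices.

```python
import numpy as np

def abc_mcmc(simulate, summary, y_obs, theta0, eps, iters=5000, step=0.1, seed=0):
    """Minimal ABC-MCMC sketch: with a flat prior and a symmetric random-walk
    proposal, a proposed parameter is accepted whenever data simulated from it
    has a summary within eps of the observed summary."""
    rng = np.random.default_rng(seed)
    s_obs = summary(y_obs)
    theta = float(theta0)
    samples = []
    for _ in range(iters):
        prop = theta + step * rng.normal()
        if np.abs(summary(simulate(prop, rng)) - s_obs) < eps:
            theta = prop                      # indicator pseudo-likelihood hit
        samples.append(theta)
    return np.array(samples)

# Toy observation-driven example: an AR(1)-type recursion observed directly.
def simulate(theta, rng, T=200):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = theta * y[t - 1] + rng.normal()
    return y

y_obs = simulate(0.6, np.random.default_rng(1))
summary = lambda y: np.corrcoef(y[:-1], y[1:])[0, 1]    # lag-1 autocorrelation
print(abc_mcmc(simulate, summary, y_obs, theta0=0.5, eps=0.1).mean())
```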

Ehrlich E, Jasra A, Kantas N, 2013, Gradient free parameter estimation for hidden Markov models with intractable likelihoods, Methodology and Computing in Applied Probability, Vol: 17, Pages: 315-349, ISSN: 1573-7713

In this article we focus on Maximum Likelihood estimation (MLE) for the static model parameters of hidden Markov models (HMMs). We will consider the case where one cannot or does not want to compute the conditional likelihood density of the observation given the hidden state because of increased computational complexity or analytical intractability. Instead we will assume that one may obtain samples from this conditional likelihood and hence use approximate Bayesian computation (ABC) approximations of the original HMM. Although these ABC approximations will induce a bias, this can be controlled to arbitrary precision via a positive parameter $\epsilon$, so that the bias decreases with decreasing $\epsilon$. We first establish that when using an ABC approximation of the HMM for a fixed batch of data, then the bias of the resulting log-marginal likelihood and its gradient is no worse than $O(n\epsilon)$, where n is the total number of data-points. Therefore, when using gradient methods to perform MLE for the ABC approximation of the HMM, one may expect parameter estimates of reasonable accuracy. To compute an estimate of the unknown and fixed model parameters, we propose a gradient approach based on simultaneous perturbation stochastic approximation (SPSA) and Sequential Monte Carlo (SMC) for the ABC approximation of the HMM. The performance of this method is illustrated using two numerical examples.

Journal article
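
The SPSA ingredient used in this paper is easy to state on its own: a gradient estimate built from just two noisy objective evaluations per iteration, with all coordinates perturbed simultaneously by random signs. The sketch below applies it to a generic noisy objective; in the paper the noisy objective would be an SMC estimate of the ABC log-likelihood, and the gain sequences here are standard textbook defaults rather than the authors' tuning.

```python
import numpy as np

def spsa_maximise(objective, theta0, iters=200, a=0.1, c=0.1, seed=0):
    """Simultaneous perturbation stochastic approximation (SPSA): estimate the
    gradient of a noisy objective from two evaluations per iteration by
    perturbing every coordinate at once with random +/-1 signs."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602            # standard decaying gain sequences
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        g_hat = (objective(theta + ck * delta) - objective(theta - ck * delta)) / (2 * ck * delta)
        theta = theta + ak * g_hat     # ascent step on the noisy objective
    return theta

# Noisy quadratic with maximum at (1, -2).
rng = np.random.default_rng(1)
noisy = lambda th: -np.sum((th - np.array([1.0, -2.0])) ** 2) + 0.1 * rng.normal()
print(spsa_maximise(noisy, np.zeros(2)))
```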

Kantas N, Singh SS, Doucet A, 2012, Distributed Maximum Likelihood for Simultaneous Self-Localization and Tracking in Sensor Networks, IEEE Transactions on Signal Processing, Vol: 60, Pages: 5038-5047, ISSN: 1053-587X

Journal article

Jasra A, Kantas N, Persing A, 2012, Bayesian Parameter Inference for Partially Observed Stopped Processes, Statistics and Computing, Vol: n/a, Pages: n/a-n/a, ISSN: 0960-3174

In this article we consider Bayesian parameter inference associated with partially-observed stochastic processes that start from a set B0 and are stopped or killed at the first hitting time of a known set A. Such processes occur naturally within the context of a wide variety of applications. The associated posterior distributions are highly complex and posterior parameter inference requires the use of advanced Markov chain Monte Carlo (MCMC) techniques. Our approach uses a recently introduced simulation methodology, particle Markov chain Monte Carlo (PMCMC) (Andrieu et al. 2010 [1]), where sequential Monte Carlo (SMC) approximations (see Doucet et al. 2001 [18] and Liu 2001 [27]) are embedded within MCMC. However, when the parameter of interest is fixed, standard SMC algorithms are not always appropriate for many stopped processes. In Chen et al. [11] and Del Moral 2004 [15] the authors introduce SMC approximations of multi-level Feynman-Kac formulae, which can lead to more efficient algorithms. This is achieved by devising a sequence of nested sets from B0 to A and then performing the resampling step only when the samples of the process reach intermediate level sets in the sequence. Naturally, the choice of the intermediate level sets is critical to the performance of such a scheme. In this paper, we demonstrate that multi-level SMC algorithms can be used as a proposal in PMCMC. In addition, we propose a flexible strategy that adapts the level sets for different parameter proposals. Our methodology is illustrated on the coalescent model with migration.

Journal article

Whiteley N, Kantas N, Jasra A, 2012, Linear Variance Bounds for Particle Approximations of Time-Homogeneous Feynman-Kac Formulae, Stochastic Processes and their Applications

This article establishes sufficient conditions for a linear-in-time bound on the non-asymptotic variance of particle approximations of time-homogeneous Feynman-Kac formulae. These formulae appear in a wide variety of applications including option pricing in finance and risk sensitive control in engineering. In direct Monte Carlo approximation of these formulae, the non-asymptotic variance typically increases at an exponential rate in the time parameter. It is shown that a linear bound holds when a non-negative kernel, defined by the logarithmic potential function and Markov kernel which specify the Feynman-Kac model, satisfies a type of multiplicative drift condition and other regularity assumptions. Examples illustrate that these conditions are general and flexible enough to accommodate two rather extreme cases, which can occur in the context of a non-compact state space: 1) when the potential function is bounded above, not bounded below and the Markov kernel is not ergodic; and 2) when the potential function is not bounded above, but the Markov kernel itself satisfies a multiplicative drift condition.

Journal article

Kantas N, Lecchini-Visintini A, Maciejowski JM, 2010, Simulation-based Bayesian optimal design of aircraft trajectories for air traffic management, International Journal of Adaptive Control and Signal Processing, Vol: 24, Pages: 882-899, ISSN: 1099-1115

In this paper, we consider a specific Air Traffic Management problem, where multiple aircraft in a specific region are each required to reach a different target zone in the minimum expected time, while maintaining safe separation. The problem is complicated by the presence of random wind disturbances. We propose a realistic policy to automatically generate optimal and safe manoeuvres for each aircraft. The parameters of the optimal policy are computed using a sequential Monte Carlo approach.

Journal article

Yang Z, Kantas N, Lecchini-Visintini A, Maciejowski JM et al., 2010, Stable Markov decision processes using predictive control, International Symposium on Mathematical Theory of Networks and Systems (MTNS 2010)

Conference paper

Siva E, Goulart P, Maciejowski JM, Kantas N et al., 2009, Stability of model predictive control using Markov Chain Monte Carlo optimisation, 2009 European Control Conference (ECC), Publisher: IEEE, Pages: 2851-2856

We apply stochastic Lyapunov theory to perform stability analysis of MPC controllers for nonlinear deterministic systems where the underlying optimisation algorithm is based on Markov Chain Monte Carlo (MCMC) or other stochastic methods. We provide a set of assumptions and conditions required for employing the approximate value function obtained as a stochastic Lyapunov function, thereby providing almost sure closed loop stability. We demonstrate convergence of the system state to a target set on an example, in which simulated annealing with finite time stopping is used to control a nonlinear system with non-convex constraints.

Conference paper

Kantas N, Maciejowski JM, Lecchini-Visintini A, 2009, Sequential Monte Carlo for Model Predictive Control, International Workshop on Assessment and Future Directions of Nonlinear Model Predictive Control, Publisher: Springer-Verlag Berlin, Pages: 263+, ISSN: 0170-8643

Conference paper

Kantas N, Doucet A, Singh SS, Maciejowski JM et al., 2009, An overview of Sequential Monte Carlo methods for parameter estimation in general state-space models, 15th IFAC Symposium on System Identification (SYSID), Pages: 774-785

Conference paper

Singh SS, Kantas N, Vo B-N, Doucet A, Evans RJ et al., 2007, Simulation-based optimal sensor scheduling with application to observer trajectory planning, Automatica, Vol: 43, Pages: 817-830, ISSN: 0005-1098

Journal article

Kantas N, Singh SS, Doucet A, 2007, Distributed online self-localization and tracking in sensor networks, 5th International Symposium on Image and Signal Processing and Analysis, Publisher: IEEE, Pages: 498+

Conference paper

Kantas N, Singh SS, Doucet A, 2006, Distributed self localisation of sensor networks using particle methods, IEEE Nonlinear Statistical Signal Processing Workshop, Publisher: IEEE, Pages: 164+

Conference paper

Kantas N, Singh SS, Doucet A, 2006, A distributed recursive maximum likelihood implementation for sensor registration, 9th International Conference on Information Fusion, Publisher: IEEE, Pages: 797-804

Conference paper

Singh S, Kantas N, Vo B-N, Doucet A, Evans RJ et al., 2005, Simulation-based optimal sensor scheduling with application to observer trajectory planning, 44th IEEE Conference on Decision and Control / European Control Conference (CDC-ECC), Publisher: IEEE, Pages: 7296-7301, ISSN: 0191-2216

Conference paper

Kantas N, Parpas P, Pavliotis GA, The sharp, the flat and the shallow: Can weakly interacting agents learn to escape bad minima?

An open problem in machine learning is whether flat minima generalize better and how to compute such minima efficiently. This is a very challenging problem. As a first step towards understanding this question we formalize it as an optimization problem with weakly interacting agents. We review appropriate background material from the theory of stochastic processes and provide insights that are relevant to practitioners. We propose an algorithmic framework for an extended stochastic gradient Langevin dynamics and illustrate its potential. The paper is written as a tutorial, and presents an alternative use of multi-agent learning. Our primary focus is on the design of algorithms for machine learning applications; however the underlying mathematical framework is suitable for the understanding of large scale systems of agent-based models that are popular in the social sciences, economics and finance.

Working paper
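
A rough picture of the framework described in this abstract is a system of Langevin-type agents that follow noisy gradients of the loss while being weakly pulled towards the ensemble mean. The sketch below implements that generic picture on a one-dimensional double-well loss; the coupling term, step sizes and temperature are illustrative assumptions and not the authors' extended stochastic gradient Langevin scheme.

```python
import numpy as np

def interacting_langevin(grad_loss, n_agents, dim, steps=2000,
                         step=1e-2, temp=1e-2, coupling=0.1, seed=0):
    """Generic sketch of weakly interacting Langevin agents: each copy follows
    a (stochastic) gradient plus noise, with a weak pull towards the ensemble
    mean. Parameters and the coupling form are illustrative choices."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_agents, dim))
    for _ in range(steps):
        mean = x.mean(axis=0)
        drift = -grad_loss(x) - coupling * (x - mean)      # loss gradient plus weak attraction
        x = x + step * drift + np.sqrt(2 * step * temp) * rng.normal(size=x.shape)
    return x

# Double-well loss in 1D with minima near x = +/-1, the flat/sharp structure the paper studies.
grad = lambda x: 4 * x * (x ** 2 - 1)
print(interacting_langevin(grad, n_agents=20, dim=1).mean())
```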

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
