Abstract:

This will be an introductory tutorial on Markov chain Monte Carlo methods, with applications to probabilistic modelling and Bayesian inference. "Monte Carlo" is the jargon for statistical sampling as a way to characterize a distribution. For example, how many neutrons might be generated in a nuclear reaction? Simulate it and see. Simulate it multiple times, and see the distribution of possibilities.
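As a minimal sketch of this "simulate and tabulate" idea, the snippet below repeatedly simulates a toy reaction and builds up the empirical distribution of outcomes. The outcome probabilities are illustrative numbers chosen for this example, not physical data.

```python
import random
from collections import Counter

random.seed(0)

# Possible neutron counts from one simulated event, with made-up
# illustrative probabilities (not physical data).
outcomes = [0, 1, 2, 3]
probs = [0.1, 0.3, 0.4, 0.2]

def simulate_reaction():
    """Draw the number of neutrons released by one simulated event."""
    return random.choices(outcomes, weights=probs)[0]

# Simulate many times and tabulate the empirical distribution.
n_trials = 100_000
counts = Counter(simulate_reaction() for _ in range(n_trials))
empirical = {k: counts[k] / n_trials for k in outcomes}
```

With enough trials, the empirical frequencies approach the underlying probabilities; this is the basic Monte Carlo idea before any Markov chains are involved.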

Given a probabilistic model, we want to describe distributions conditioned on data. What’s the distribution over what will happen next? What’s our posterior distribution/belief about the processes that generated the data we saw? It is often difficult to draw samples from these probability distributions directly.

Markov chain Monte Carlo (MCMC) creates a fictitious physical system to explore complex probability distributions. A Markov chain can explore the space of explanations of our data, which we can use to make predictions about what might happen next. It is easy to construct a Markov chain that is ‘valid’, that explores the correct distribution asymptotically. I will discuss some of the choices that govern whether samples are generated quickly. I will also discuss what to do with these samples once you have them.
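To make the "valid chain" idea concrete, here is a minimal random-walk Metropolis sampler, one standard way to construct such a chain; this is a generic illustration, not the specific methods covered in the talk, and the standard-normal target is an assumption chosen for the example.

```python
import math
import random

random.seed(0)

def log_target(x):
    # Unnormalized log-density of a standard normal (illustrative target).
    return -0.5 * x * x

def metropolis(n_samples, step_size=1.0, x0=0.0):
    """Random-walk Metropolis: propose x' = x + noise, accept with
    probability min(1, p(x')/p(x)). Whatever the starting point, the
    chain's stationary distribution is the target, so long runs give
    (correlated) samples from it."""
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step_size)
        log_accept = log_target(proposal) - log_target(x)
        if math.log(random.random()) < log_accept:
            x = proposal
        samples.append(x)
    return samples

samples = metropolis(50_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The chain is valid for a wide range of step sizes, but the choice strongly affects how quickly it explores the distribution, which is exactly the kind of tuning question the talk discusses.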

Bio:

Iain Murray is a SICSA Lecturer in Machine Learning at the University of Edinburgh. He moved into machine learning from physics after taking David MacKay's undergraduate course in Cambridge. He obtained his PhD in 2007 from the Gatsby Computational Neuroscience Unit at UCL, under Zoubin Ghahramani. He was a Commonwealth Fellow in Machine Learning at the University of Toronto, before moving to Edinburgh in 2010. His research interests include building flexible probabilistic models of data that can be applied widely: to cosmology, images, neuroscience, perception, speech, sports, text, and beyond. Iain has developed several Markov chain Monte Carlo (MCMC) methods for performing inference in these models.