Abstract

The fundamental principle that makes learning possible is the ability to make assumptions. The science of machine learning is about developing methodologies that allow us to formulate assumptions as explicit mathematical objects (modelling) and integrate them with observed data (inference). To facilitate learning in more domains we continuously strive to make stronger and stronger assumptions so that we can become more data-efficient.

One of the most challenging scenarios is that of unsupervised learning, where we aim to explain the data independently of any task. This is a very ill-constrained problem that requires strong assumptions to provide a satisfactory explanation. In this lecture I will focus on Gaussian process priors, which are objects that allow us to specify structure on continuous, infinite-dimensional parameter spaces. We will discuss the use of these priors for latent variable models and the mathematical tools that are needed to combine our assumptions with data.
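As a brief illustrative sketch (the notation here is my own assumption, not taken from the lecture itself): a Gaussian process prior places a distribution directly over functions, and in a latent variable model such as the Gaussian process latent variable model (GPLVM) the observed data are explained through unobserved inputs,

    f \sim \mathcal{GP}\bigl(m(\cdot),\, k(\cdot,\cdot)\bigr), \qquad y_n = f(x_n) + \varepsilon_n, \qquad \varepsilon_n \sim \mathcal{N}(0, \sigma^2),

where the latent inputs x_n are inferred jointly with the function f.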

I will try to provide the motivation and intuition behind these models and show why I believe they are becoming ever more important as a tool for understanding the assumptions needed when learning composite functions.