Title: Bayesian inference with the mysterious Plug & Play priors encoded by neural networks, application to imaging sciences.

Abstract: Plug & Play (PnP) methods have become ubiquitous in Bayesian imaging sciences. These methods algorithmically combine an explicit likelihood function with a prior that is implicitly defined by an image denoising algorithm. The PnP algorithms proposed in the literature differ mainly in the iterative schemes they use for optimisation or for sampling. In most cases, there are no theoretical guarantees on the delivered solution, or only under unrealistic assumptions. Important open questions also remain regarding whether the underlying Bayesian models and estimators are well defined, well-posed, and have the basic regularity properties required to support these numerical schemes. To address these limitations, this talk presents theory, methods, and provably convergent algorithms for performing Bayesian inference with PnP priors. We introduce two algorithms: 1) the PnP Unadjusted Langevin Algorithm for Monte Carlo sampling and MMSE inference, and 2) PnP Stochastic Gradient Descent for MAP inference. Using recent results on the quantitative convergence of Markov chains, we establish detailed convergence guarantees for these two algorithms under realistic assumptions on the denoising operators used, with special attention to denoisers based on deep neural networks. We also show that these algorithms approximately target a decision-theoretically optimal Bayesian model that is well-posed. The proposed algorithms are demonstrated on several canonical problems, such as image deblurring, inpainting, and denoising, where they are used for point estimation as well as for uncertainty visualisation and quantification.
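To make the first algorithm concrete, the sketch below shows a common form of the PnP Unadjusted Langevin Algorithm iteration as it appears in the PnP sampling literature: a gradient step on the log-likelihood, a drift term (D(x) - x)/eps built from a denoiser D standing in for the gradient of the log-prior, and injected Gaussian noise; averaging the iterates yields an MMSE estimate. This is a minimal illustration, not the talk's implementation: the toy Gaussian likelihood, the linear shrinkage "denoiser" (a stand-in for a learned network), and all step-size values are assumptions chosen for readability.

```python
import numpy as np

def pnp_ula(y, grad_log_likelihood, denoiser, delta=5e-4, eps=1e-2,
            n_iter=20000, rng=None):
    """Minimal PnP-ULA sketch (illustrative, not the talk's implementation).

    Iterates x_{k+1} = x_k + delta * grad log p(y | x_k)
                           + (delta / eps) * (D(x_k) - x_k)
                           + sqrt(2 * delta) * z_k,   z_k ~ N(0, I),
    and returns the running mean of the chain as an MMSE estimate.
    """
    rng = rng or np.random.default_rng(0)
    x = y.copy()
    mean = np.zeros_like(x)
    for k in range(n_iter):
        drift = grad_log_likelihood(x) + (denoiser(x) - x) / eps
        x = x + delta * drift + np.sqrt(2 * delta) * rng.standard_normal(x.shape)
        mean += (x - mean) / (k + 1)  # streaming average of the samples
    return mean

# Toy example (hypothetical): Gaussian likelihood y = x + noise with
# variance sigma2, and a linear shrinkage map as a placeholder denoiser.
sigma2 = 0.25
y = np.array([1.0, -2.0, 0.5])
grad_ll = lambda x: (y - x) / sigma2          # gradient of log p(y | x)
denoiser = lambda x: 0.8 * x                  # implicitly a Gaussian prior
x_mmse = pnp_ula(y, grad_ll, denoiser)
```

With these placeholder choices the target posterior is Gaussian, so the chain's running mean can be checked against the closed-form posterior mean; with a deep denoiser the same loop runs unchanged, which is the appeal of the PnP construction.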