Abstract

We propose a fast method with statistical guarantees for learning an exponential family density model where the natural parameter is in a reproducing kernel Hilbert space, and may be infinite dimensional. The model is learned by fitting the derivative of the log density, the score, thus avoiding the need to compute a normalization constant. We improve the computational efficiency of an earlier solution with a low-rank, Nyström-like approximation. The new solution retains the consistency and convergence rates of the full-rank solution (exactly in Fisher distance, and nearly in other distances), with guarantees on the degree of cost and storage reduction. We evaluate the method in experiments on density estimation and in the construction of an adaptive Hamiltonian Monte Carlo sampler. Compared to an existing score-learning approach based on a denoising autoencoder, our estimator is empirically more data-efficient when estimating the score, runs faster, and has fewer parameters (which can be tuned in a principled and interpretable way), in addition to providing statistical guarantees.
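To make the core idea concrete, below is a minimal NumPy sketch of score matching for an unnormalized log-density that is linear in kernel expansion coefficients, f(x) = sum_j alpha_j k(z_j, x) with a Gaussian kernel: because f is linear in alpha, the empirical score-matching objective is quadratic and alpha has a closed form. Subsampling the expansion points z_j from the data mirrors the low-rank, Nyström-style cost reduction only in spirit; the function names and hyperparameters (sigma, lam) are illustrative, and this finite-dimensional surrogate is a sketch, not the infinite-dimensional RKHS estimator with the guarantees described above.

    import numpy as np

    def gaussian_kernel_grads(Z, X, sigma):
        # k(z_j, x_i) = exp(-||x_i - z_j||^2 / (2 sigma^2)), plus its first
        # and second partial derivatives with respect to the components of x.
        diffs = X[:, None, :] - Z[None, :, :]            # (n, m, d): x_i - z_j
        K = np.exp(-np.sum(diffs**2, axis=2) / (2 * sigma**2))   # (n, m)
        dK = -diffs / sigma**2 * K[:, :, None]           # (n, m, d): dk/dx_d
        d2K = ((diffs / sigma**2)**2 - 1.0 / sigma**2) * K[:, :, None]  # d2k/dx_d^2
        return K, dK, d2K

    def fit_score_matching(X, Z, sigma=1.0, lam=1e-3):
        # Minimize the regularized empirical score-matching objective
        #   J(alpha) = 0.5 alpha' C alpha + b' alpha + lam/2 ||alpha||^2,
        # which gives the closed-form solution alpha = -(C + lam I)^{-1} b.
        n = X.shape[0]
        m = Z.shape[0]
        _, dK, d2K = gaussian_kernel_grads(Z, X, sigma)
        # C accumulates outer products of the kernel gradients over data
        # points i and input dimensions d; b accumulates second derivatives.
        C = np.einsum('imd,ild->ml', dK, dK) / n         # (m, m)
        b = d2K.sum(axis=(0, 2)) / n                     # (m,)
        return np.linalg.solve(C + lam * np.eye(m), -b)

    def log_density_unnorm(x, Z, alpha, sigma):
        # Unnormalized log-density f(x) = sum_j alpha_j k(z_j, x).
        k = np.exp(-np.sum((x - Z)**2, axis=1) / (2 * sigma**2))
        return alpha @ k

    # Usage: subsample m << n expansion points from the data, the rough
    # analogue of the low-rank cost/storage reduction.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))
    Z = X[rng.choice(len(X), size=50, replace=False)]
    alpha = fit_score_matching(X, Z, sigma=1.0, lam=1e-3)
    print(log_density_unnorm(np.zeros(2), Z, alpha, sigma=1.0))

Note that no normalization constant is ever computed: fitting and evaluating f only ever touch kernel values and their derivatives, which is what makes the score-based objective attractive here.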

Speaker Bio

Heiko Strathmann is a PhD student at the Gatsby Unit for Computational Neuroscience and Machine Learning. His research focuses on kernel methods with applications to Monte Carlo and statistical testing, as well as on Bayesian inference in general. He is active in the open-source community as a core maintainer of the Shogun machine learning toolbox, and he delivers predictive analytics for the energy industry with Swhere Ltd.