For more information on my research, teaching, and background, see my website.
In short, I work on:
■ Methods that help with 1) prediction problems with limited and/or noisy data, 2) intelligent data gathering, experimental design, and active learning, and 3) decision making under uncertainty.
■ Making neural networks 1) more robust, by improving estimates of their uncertainty when making predictions, and 2) more automatic, by creating methods that tune hyperparameters without manual intervention. In the long term, I hope to develop a convenient way to learn the structure of a neural network just as easily as its weights. Statistical analysis, Bayesian inference, and information theory underlie the approaches I take.
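To give a flavour of the Bayesian approach to automatic hyperparameter tuning, here is a minimal, self-contained sketch (not code from any of the papers below) of model selection by maximising the log marginal likelihood of a Gaussian process regressor. All names and settings here (the RBF kernel, the toy sine data, the grid of candidate lengthscales) are illustrative assumptions, not part of any published method:

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale, variance=1.0):
    """Squared-exponential kernel matrix between two sets of 1-D inputs."""
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale ** 2)

def log_marginal_likelihood(x, y, lengthscale, noise=0.1):
    """GP regression log marginal likelihood: the Bayesian model-selection
    objective that lets hyperparameters be tuned without a validation set."""
    n = len(x)
    K = rbf_kernel(x, x, lengthscale) + noise ** 2 * np.eye(n)
    L = np.linalg.cholesky(K)                     # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    return (-0.5 * y @ alpha                      # data fit
            - np.sum(np.log(np.diag(L)))          # 0.5 * log|K|
            - 0.5 * n * np.log(2 * np.pi))        # normalising constant

# Toy data: noisy observations of a sine function (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 30)
y = np.sin(x) + 0.1 * rng.standard_normal(30)

# "Automatic" tuning: pick the lengthscale that maximises the
# marginal likelihood over a grid of candidates.
candidates = np.linspace(0.1, 3.0, 50)
best = max(candidates, key=lambda ls: log_marginal_likelihood(x, y, ls))
```

The same objective can be optimised with gradients rather than a grid; the key point is that the training data alone, via the marginal likelihood, selects the hyperparameters.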
If any of this interests you, do reach out. I welcome new connections with people in industry who are interested in applying my methods and making them work well in practice. If you are interested in doing a PhD, please get in touch after reading my website.
et al., 2021, BNNpriors: A library for Bayesian neural network inference with different prior distributions, Software Impacts, Vol:9, ISSN:2665-9638, Pages:100079
Artemev A, Burt DR, van der Wilk M, 2021, Tighter Bounds on the Log Marginal Likelihood of Gaussian Process Regression Using Conjugate Gradients, Proceedings of the 38th International Conference on Machine Learning (ICML), Vol:139, Pages:362-372
Ober SW, Rasmussen CE, van der Wilk M, 2021, The Promises and Pitfalls of Deep Kernel Learning, Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence (UAI)
Garriga-Alonso A, van der Wilk M, 2021, Correlated Weights in Infinite Limits of Deep Convolutional Neural Networks, Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence (UAI)
et al., 2021, Understanding Variational Inference in Function-Space, Third Symposium on Advances in Approximate Bayesian Inference