**Please see http://yingzhenli.net/home/en/ for details**
I am passionate about building reliable machine learning systems that can generalise to unseen environments, and my approach combines Bayesian statistics and deep learning.
My current research interests are two-fold, and I develop both using probabilistic ML methods:
1. Trustworthy ML models: uncertainty quantification, robustness, explainable ML, decision-making, adaptive methods (e.g., continual learning, model editing), etc.
2. Generative modelling: sequential generative models on both sequence and static data (e.g., video, spatiotemporal data, general time-series, images, tabular data, etc.), and causal representation learning with generative models.
I have worked (and continue to work) extensively on approximate inference, with applications to Bayesian deep learning and deep generative models. My work on this subject has been applied in industrial systems and implemented in deep learning frameworks (e.g. TensorFlow Probability and Pyro). To learn more about this subject, see my tutorial on approximate inference at NeurIPS 2020.
Before joining Imperial, I was a senior researcher at Microsoft Research Cambridge, and before that I interned at Disney Research. I received my PhD in engineering from the University of Cambridge, UK.
et al., Sparse Uncertainty Representation in Deep Learning with Inducing Weights, Neural Information Processing Systems (NeurIPS)
et al., 2021, Active Slices for Sliced Stein Discrepancy, International Conference on Machine Learning (ICML)
et al., 2021, Learning Sparse Sentence Encoding without Supervision: An Exploration of Sparsity in Variational Autoencoders, Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics (ACL) and the 11th International Joint Conference on Natural Language Processing (IJCNLP), 6th Workshop on Representation Learning for NLP (RepL4NLP), pages 34–46
et al., 2021, Meta-Learning Divergences of Variational Inference, 24th International Conference on Artificial Intelligence and Statistics (AISTATS)