Title:

Approximation of high-dimensional functions: from polynomials to deep neural networks

Abstract:

Driven by numerous applications in modern computational science – in particular, Uncertainty Quantification – the approximation of smooth, high-dimensional functions has received renewed attention over the last two decades. This problem is rendered challenging not only by the famous curse of dimensionality, but also by the limited amount of data commonly available in applications. Nevertheless, in the last five to ten years, the introduction of techniques based on sparse polynomial expansions has led to new progress towards overcoming these challenges. In the first part of this talk I will survey recent developments in this area. In particular, I will show how the proper use of compressed sensing tools leads to new methods for high-dimensional approximation that can substantially mitigate the curse of dimensionality. The second part of this talk focuses on recent machine learning-inspired approaches to high-dimensional approximation. Deep Neural Networks (DNNs) are currently the subject of significant interest in the computational science community. I will give an overview of recent theoretical and numerical results comparing these techniques with compressed sensing, and discuss both the challenges and the potential of practical high-dimensional approximation with DNNs.