The goal of the workshop is to gather experts from different areas of this interdisciplinary field to investigate and discuss how to harness the power of machine learning techniques to solve high-dimensional, nonlinear partial differential equations (PDEs), as well as how to leverage the theory of PDEs to construct better machine learning models and study their theoretical properties.

PDEs are a dominant modelling paradigm ubiquitous throughout science, from fluid dynamics and quantum mechanics to the calculus of variations and quantitative finance. When the PDEs at hand are low-dimensional (dimension 1 to 4), they can generally be solved numerically by leveraging a large arsenal of techniques developed over the last 150 years, including finite difference and finite element methods.

Nonetheless, PDEs arising from complex, real-world problems in finance, engineering, and physics are often so high-dimensional (sometimes even infinite-dimensional) that classical numerical techniques are either not directly applicable or do not scale to high-resolution computations. Examples of such intractable equations include pricing and hedging under rough volatility dynamics; non-Markovian, path-dependent stochastic control problems; and turbulent fluid flows that must be resolved at very fine scales.

Recent advances in machine learning (ML) have enabled the development of novel computational techniques for tackling PDE-based problems previously considered out of reach for classical methods. Physics-informed neural networks, neural differential equations, and neural operators are among the most popular deep learning models used to tackle PDE-related problems.

More concretely, the workshop aims to develop a classification of ML techniques according to the type of PDE they address, and to set clear new directions in the design of optimal numerical schemes, both in practice and in theory (with convergence results). The list of participants is designed to maximise interdisciplinarity and encourage diversity, bringing together experts in fields such as stochastic analysis, numerical analysis, mathematical finance, and machine learning.

Co-organisers

Dr A. Jacquier (Imperial College), Prof. J. Ruf (LSE) and Dr C. Salvi (Imperial College).

Please contact the organisers if you are interested in attending the workshop.

Funding sources

EPSRC, LSE, Imperial.

Confirmed speakers

Name                     Affiliation
Anastasia Borovykh       Imperial College London
Stefania Fresca          Politecnico di Milano
Camilo Garcia Trillos    University College London
Jack Jacquier            Imperial College London
Dante Kalise             Imperial College London
Athena Picarelli         University of Verona
Johannes Ruf             London School of Economics
Cris Salvi               Imperial College London
Yuri Saporito            FGV EMAp

Titles and abstracts

Anastasia Borovykh
Title: On the choice of loss functions and initializations for deep learning-based solvers for PDEs
Abstract: In this talk we will discuss several challenges that arise when solving PDEs with deep learning-based solvers. We will begin by defining the loss function of a general PDE and discuss how this choice of loss function, and specifically the weighting of its different terms, can impact the accuracy of the solution. We will show how to choose an optimal weighting that yields accurate solutions. Next, we will focus on the approximation of the Hamilton-Jacobi-Bellman (HJB) partial differential equation associated with the optimal stabilization of the nonlinear quadratic regulator problem. It is not obvious that the neural network will converge to the correct solution under just any initialisation; this is particularly relevant when the solution to the HJB PDE is non-unique. We will discuss a two-step learning approach in which the model is pre-trained on a dataset obtained from solving a state-dependent Riccati equation, and we show that efficient and accurate convergence can be obtained in this way.
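
For illustration, here is a minimal sketch (not taken from the talk) of how per-term weights enter a deep learning-based PDE loss, for the toy 1D Poisson problem u''(x) = f(x) with zero Dirichlet boundary conditions. The parameters w_residual and w_boundary play the role of the loss-term weighting discussed above; all names and sizes are illustrative.

    import torch

    # small fully-connected network approximating the solution u(x)
    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
    )

    def pinn_loss(net, x_interior, x_boundary, f, w_residual=1.0, w_boundary=1.0):
        x = x_interior.requires_grad_(True)
        u = net(x)
        du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
        residual = ((d2u - f(x)) ** 2).mean()     # PDE residual on collocation points
        boundary = (net(x_boundary) ** 2).mean()  # zero-Dirichlet boundary penalty
        return w_residual * residual + w_boundary * boundary

    x_in = torch.rand(128, 1)                     # interior collocation points
    x_bd = torch.tensor([[0.0], [1.0]])           # boundary points
    loss = pinn_loss(net, x_in, x_bd, f=lambda x: torch.sin(torch.pi * x))
    loss.backward()                               # gradients w.r.t. network weights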

Stefania Fresca
Title: Deep learning-based reduced order models for scientific applications
Abstract: The solution of differential equations by means of full order models (FOMs), such as the finite element method, entails prohibitive computational costs when it comes to real-time simulations and multi-query routines. The purpose of reduced order modeling is to replace FOMs with suitable surrogates, so-called reduced order models (ROMs), characterized by much lower complexity but still able to express the physical features of the system under investigation. Conventional ROMs anchored to the assumption of linear modal superimposition, such as proper orthogonal decomposition (POD), may prove inefficient when dealing with nonlinear, time-dependent, parametrized PDEs, especially for problems featuring coherent structures propagating over time. To enhance ROM efficiency, we propose a nonlinear approach to building ROMs that exploits deep learning (DL) algorithms, such as convolutional neural networks (CNNs). In the resulting DL-ROM, both the nonlinear trial manifold and the nonlinear reduced dynamics are learned in a non-intrusive way by relying on DL algorithms trained on a set of FOM snapshots obtained for different parameter values. Furthermore, in the case of large-scale FOMs, a preliminary dimensionality reduction of the FOM snapshots through POD speeds up training and substantially decreases the network complexity. The accuracy and efficiency of the DL-ROM technique are assessed in different scientific applications aimed at solving parametrized PDE problems, e.g., in cardiac electrophysiology, computational mechanics, and fluid dynamics, possibly accounting for fluid-structure interaction effects, where new queries to the DL-ROM can be computed in real time. Finally, with the aim of moving towards a rigorous justification of the mathematical foundations of DL-ROMs, error bounds are derived for the approximation of nonlinear operators by means of CNNs. The resulting error estimates provide a clear interpretation of the hyperparameters defining the neural network architecture.
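
A schematic sketch of the two-stage pipeline described above, with random stand-in snapshots, illustrative sizes, and dense layers standing in for the CNNs of the actual DL-ROM: POD via SVD first compresses the FOM snapshots, and a small autoencoder then learns a nonlinear reduced manifold on the POD coefficients.

    import torch

    # snapshot matrix: (spatial dofs) x (time/parameter samples); random stand-in
    S = torch.randn(2000, 200)
    U, _, _ = torch.linalg.svd(S, full_matrices=False)
    V = U[:, :64]                     # first 64 POD modes
    q = (V.T @ S).T                   # POD coefficients, one row per snapshot

    latent_dim = 4                    # assumed intrinsic reduced dimension
    encoder = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ELU(),
                                  torch.nn.Linear(32, latent_dim))
    decoder = torch.nn.Sequential(torch.nn.Linear(latent_dim, 32), torch.nn.ELU(),
                                  torch.nn.Linear(32, 64))
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)

    for _ in range(200):              # reconstruction training on POD coefficients
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(decoder(encoder(q)), q)
        loss.backward()
        opt.step()

    # full-order reconstruction of one snapshot from its latent code
    x_hat = V @ decoder(encoder(q[:1])).squeeze(0)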

Camilo Garcia Trillos
Title: Neural network approximation of some second order semilinear PDEs
Abstract: Since its inception in the early 90s, the well-known connection between second-order semilinear PDEs and Markovian BSDEs has been useful in creating numerical probabilistic methods to solve the former. Our approach to the solution of these PDEs belongs to a recent stream in the literature that uses neural networks together with the BSDE connection to define numerical methods that are robust and efficient in large dimensions. In contrast with existing works, our analysis focuses on the case where the derivatives enter ‘quadratically’ in the semilinear term, covering some interesting cases in control theory. In this setting, we study both forward and backward types of neural-network-based methods. Joint work with Daniel Bussell.
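
For context, the standard nonlinear Feynman-Kac correspondence the abstract alludes to reads as follows (in LaTeX notation): if u solves the semilinear PDE

    \partial_t u + \tfrac{1}{2}\,\mathrm{Tr}\big(\sigma\sigma^\top D_x^2 u\big)
      + b \cdot \nabla_x u + f\big(t, x, u, \sigma^\top \nabla_x u\big) = 0,
    \qquad u(T, \cdot) = g,

then Y_t = u(t, X_t) and Z_t = \sigma^\top \nabla_x u(t, X_t) solve the BSDE

    dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dW_t,
    \qquad
    -\,dY_t = f(t, X_t, Y_t, Z_t)\,dt - Z_t \cdot dW_t,
    \qquad Y_T = g(X_T).

The ‘quadratic’ case above refers to drivers f growing quadratically in Z.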

Dante Kalise
Title: Data-driven schemes for Hamilton-Jacobi-Bellman equations
Abstract: In this talk I will discuss different computational aspects arising in the construction of data-driven schemes for HJB PDEs. First, I will present synthetic data generation through representation formulas, including Pontryagin’s Maximum Principle and state-dependent Riccati equations. These data can be used in a regression framework, for which we consider different approximation architectures: polynomial approximation, tensor train decompositions, and deep neural networks. Finally, I will address the role of synthetic data in the framework of physics-informed neural networks.
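
A toy, hypothetical version of this pipeline (not the speaker's code): for linear dynamics the state-dependent Riccati equation collapses to the standard algebraic Riccati equation, whose solution supplies synthetic value-function samples that a polynomial regression then fits. All matrices and sizes below are illustrative.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # illustrative linear dynamics dx = (A x + B u) dt with quadratic cost
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    B = np.eye(2)
    Q = np.eye(2)
    R = np.eye(2)
    P = solve_continuous_are(A, B, Q, R)      # V(x) = x^T P x solves the HJB here

    X = np.random.randn(500, 2)               # sampled states
    y = np.einsum('ni,ij,nj->n', X, P, X)     # synthetic value-function data

    # regression on quadratic monomial features [x1^2, x1*x2, x2^2]
    Phi = np.column_stack([X[:, 0] ** 2, X[:, 0] * X[:, 1], X[:, 1] ** 2])
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)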

Athena Picarelli
Title: A deep solver for BSDEs with jumps
Abstract: The aim of this work is to propose an extension of the deep BSDE solver of Han, E, and Jentzen (2017) to the case of FBSDEs with jumps. As in the aforementioned solver, starting from a discretized version of the BSDE and parametrizing the (high-dimensional) control processes by means of a family of ANNs, the BSDE is viewed as a model-based reinforcement learning problem and the ANN parameters are fitted so as to minimize a prescribed loss function. We take into account both finite and infinite jump activity, introducing in the latter case an approximation of the forward process with finitely many jumps. (Joint work with A. Gnoatto and M. Patacca.)
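
For reference, a minimal sketch of the original diffusion-only deep BSDE solver that the talk extends; the jump extension would add a compensated-jump term to the Euler step inside the loop. The problem data here (driver f = 0, Brownian forward process, quadratic terminal payoff) are toy choices.

    import torch

    T, N, d, batch = 1.0, 20, 10, 256            # horizon, time steps, dimension
    dt = T / N
    y0 = torch.zeros(1, requires_grad=True)      # learned initial value Y_0
    z_nets = torch.nn.ModuleList(                # one network per time step for Z_t
        [torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.ReLU(),
                             torch.nn.Linear(32, d)) for _ in range(N)])
    g = lambda x: (x ** 2).sum(dim=1, keepdim=True)   # toy terminal condition
    opt = torch.optim.Adam([y0] + list(z_nets.parameters()), lr=1e-3)

    for _ in range(200):
        X = torch.zeros(batch, d)                # forward process, here X = W
        Y = y0.expand(batch, 1)
        for n in range(N):
            dW = dt ** 0.5 * torch.randn(batch, d)
            Z = z_nets[n](X)
            Y = Y + (Z * dW).sum(dim=1, keepdim=True)   # Euler step, driver f = 0
            X = X + dW
        loss = ((Y - g(X)) ** 2).mean()          # enforce Y_T = g(X_T)
        opt.zero_grad()
        loss.backward()
        opt.step()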

Yuri Saporito
Title: Gradient Boosting for Solving PDEs
Abstract: Deep learning methods have been successfully applied to solve PDEs featuring many interesting complexities (high dimensionality, nonlinearity, systems of PDEs, etc.). However, DL usually lacks proper statistical guarantees, and convergence is typically only verified empirically. In this talk, we propose a gradient boosting method to solve a class of PDEs. Although the work is still preliminary, there is some hope of deriving a proper statistical analysis of the method. Numerical implementations and examples will be discussed.
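
One plausible, hypothetical instantiation (not necessarily the scheme of the talk): combine the Feynman-Kac representation u(0, x) = E[g(x + W_T)] of the heat equation with terminal condition g with a gradient-boosted regressor, so that regression on noisy one-sample Monte Carlo targets produces a mesh-free approximation of the solution.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    T = 1.0
    g = lambda x: np.maximum(x - 1.0, 0.0)       # terminal condition (call payoff)

    x = rng.uniform(-2.0, 4.0, size=5000)        # points where u(0, .) is wanted
    W_T = rng.normal(0.0, np.sqrt(T), size=x.shape)
    y = g(x + W_T)                               # one-sample Monte Carlo targets

    # the regressor averages the Monte Carlo noise, approximating E[g(x + W_T)]
    model = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                                      learning_rate=0.05)
    model.fit(x.reshape(-1, 1), y)
    u0_at_1 = model.predict(np.array([[1.0]]))   # estimate of u(0, 1.0)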