Title
Deep Neural Network Convergence for Variational Inequalities
Abstract
Motivated by recent progress in applying deep neural networks to high-dimensional PDEs, we propose an approach that applies them to linear parabolic variational inequalities. We begin by developing a deep neural network framework that approximates the solution on a bounded domain, using loss functions that incorporate the variational inequality directly over the whole domain, so the stopping region need not be determined in advance. Crucially, we design the loss function via the inverse trace theorem, and prove the existence of neural networks whose losses converge to zero. Moreover, we prove convergence of the network approximations in the Sobolev space $H^{0,1}(\Omega_T)$. Since most optimal stopping problems are posed on unbounded domains, we extend these results to that setting, ensuring that our neural network surrogates satisfy the boundary conditions while retaining the convergence guarantees.
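Schematically (the notation here is ours, for illustration only), for a linear parabolic operator $\mathcal{L}$, source $f$, obstacle $g$, and boundary data $h$, such a variational inequality and a whole-domain penalized loss take the form
\[
\min\{\partial_t u + \mathcal{L}u - f,\; u - g\} = 0 \ \text{ in } \Omega_T, \qquad J(\theta) = \big\|\min\{\partial_t u_\theta + \mathcal{L}u_\theta - f,\; u_\theta - g\}\big\|_{L^2(\Omega_T)}^2 + \lambda\,\|u_\theta - h\|_{L^2(\partial_p\Omega_T)}^2,
\]
where $\partial_p\Omega_T$ denotes the parabolic boundary; we write a plain $L^2$ boundary penalty for brevity, whereas the analysis measures the boundary term in a norm suited to the inverse trace theorem. Because the interior residual vanishes on the continuation and stopping regions simultaneously, neither region needs to be identified beforehand.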
We then apply our approach to a mixed optimal stopping and control problem in finance. Leveraging duality, we convert the nonlinear HJB-type operator of the primal problem into a linear parabolic operator in the dual formulation. A key step is proving that the primal value function can be recovered, with convergence, from the dual neural network solution, an outcome made possible by our Sobolev norm analysis. Following the theoretical convergence analysis, we illustrate the versatility and accuracy of the method with numerical experiments for both power and non-HARA utilities, and discuss practical considerations such as domain truncation and sampling strategies. Our results underscore the potential of deep neural networks as a reliable and efficient tool for variational inequalities arising in optimization and control problems.
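As a minimal sketch of how such a penalized variational-inequality loss might be evaluated on collocation points sampled from a truncated domain (our own illustrative setup, not the authors' implementation; the network size, the choice $\mathcal{L} = \partial_{xx}$, and all names below are hypothetical):

```python
import torch

# Sketch (illustrative, not the authors' code): approximate u_theta(t, x)
# with a small MLP and penalize the variational-inequality residual
# min{du/dt - d^2u/dx^2 - f, u - g} on points from [0, T] x [-R, R].
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def vi_loss(t, x, f, g, bdry_pts, bdry_vals, lam=1.0):
    t = t.clone().requires_grad_(True)
    x = x.clone().requires_grad_(True)
    u = net(torch.stack([t, x], dim=-1)).squeeze(-1)
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    residual = u_t - u_xx - f                  # parabolic residual
    obstacle = u - g                           # obstacle constraint u >= g
    interior = torch.minimum(residual, obstacle).pow(2).mean()
    boundary = (net(bdry_pts).squeeze(-1) - bdry_vals).pow(2).mean()
    return interior + lam * boundary

# Uniform collocation sampling on the truncated domain; the sampling
# measure and truncation radius R are tuning choices in practice.
T, R, N = 1.0, 3.0, 1024
t = torch.rand(N) * T
x = (2 * torch.rand(N) - 1) * R
f = torch.zeros(N)
g = torch.relu(1.0 - x)                        # a put-style obstacle, purely illustrative
bdry_x = (2 * torch.rand(64) - 1) * R
bdry_pts = torch.stack([torch.full((64,), T), bdry_x], dim=-1)
bdry_vals = torch.relu(1.0 - bdry_x)

loss = vi_loss(t, x, f, g, bdry_pts, bdry_vals)
loss.backward()                                # gradients for any standard optimizer
```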
Bio
Yun Zhao is a doctoral student in the Department of Mathematics at Imperial College London.