APDEs Seminar

In this talk, we consider an optimal control problem for a conditioned process. More precisely, the problem is to control a stochastic process so as to minimize an expected cost conditioned on the process not exiting a given domain. The conditioning induces a nonlinearity with respect to the law of the controlled process. When the optimization is carried out over controls of feedback type (i.e., controls depending only on the state of the process), the optimal solution can be characterized by a system of two partial differential equations (PDEs) of mean field type: a forward (Kolmogorov-Fokker-Planck) equation and a backward (Hamilton-Jacobi-Bellman) equation, both with Dirichlet boundary conditions, which describe, respectively, the evolution of the distribution and the evolution of the value function. In the long-time asymptotics, the situation is described by a control problem driven by the principal eigenvalue problem associated with a Fokker-Planck PDE with Dirichlet boundary conditions. The talk will focus on particular aspects of the theory and on numerical results. This is based on joint work with Yves Achdou (Paris University) and Pierre-Louis Lions (Collège de France).
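
As a rough illustration only (a generic sketch, not necessarily the exact system presented in the talk), a forward-backward mean field system of this kind on a bounded domain Ω may take the following form, where the Hamiltonian H, the coupling F, the diffusion coefficient ν, the data u_T and m_0, and the homogeneous Dirichlet conditions are placeholders, and the conditioning on survival is modeled by coupling the equations through the normalized density m(t,·)/∫_Ω m(t,y) dy:

\[
\begin{aligned}
 -\partial_t u - \nu\,\Delta u + H(x,\nabla u) &= F\!\Big(x,\tfrac{m(t,\cdot)}{\int_\Omega m(t,y)\,dy}\Big) && \text{in } (0,T)\times\Omega,\\
 \partial_t m - \nu\,\Delta m - \operatorname{div}\!\big(m\,D_p H(x,\nabla u)\big) &= 0 && \text{in } (0,T)\times\Omega,\\
 u &= 0, \quad m = 0 && \text{on } (0,T)\times\partial\Omega,\\
 u(T,\cdot) &= u_T, \qquad m(0,\cdot) = m_0 && \text{in } \Omega.
\end{aligned}
\]

Here the backward (Hamilton-Jacobi-Bellman) equation governs the value function u and the forward (Kolmogorov-Fokker-Planck) equation governs the density m of the controlled process killed at the boundary.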