On computational barriers in mathematics of information and instabilities in deep learning for inverse problems: Anders Hansen (U. Cambridge)

Modern mathematics of information relies heavily on computing minimisers of optimisation problems such as linear programming, constrained and unconstrained Lasso, and Basis Pursuit. We will discuss the following potentially surprising phenomenon, which occurs when the inputs are irrational or floating-point arithmetic is used. For any of the mentioned optimisation problems, and for any integer K > 2, there exists a class of well-conditioned inputs such that: (1) no algorithm can produce a minimiser with K correct digits; (2) there does exist an algorithm that can produce K-1 correct digits, but any such algorithm will need arbitrarily long runtime; (3) yet the problem of computing K-2 correct digits is in P, that is, there exists an algorithm providing a solution with K-2 correct digits whose runtime is polynomial in the number of variables.
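For concreteness, here is a brief sketch of the optimisation problems in question, in their standard textbook formulations (the matrix A, data y and parameters c, lambda, delta below are generic notation introduced for illustration, not taken from the talk):

\[
\begin{aligned}
&\text{linear programming:} && \min_{x \in \mathbb{R}^N} \langle c, x \rangle \ \text{ subject to } \ Ax = y, \ x \geq 0,\\
&\text{Basis Pursuit:} && \min_{x \in \mathbb{R}^N} \|x\|_1 \ \text{ subject to } \ \|Ax - y\|_2 \leq \delta,\\
&\text{unconstrained Lasso:} && \min_{x \in \mathbb{R}^N} \|Ax - y\|_2^2 + \lambda \|x\|_1.
\end{aligned}
\]

One natural reading of "K correct digits", assumed here, is that the computed \(\tilde{x}\) satisfies \(\|\tilde{x} - x^*\|_\infty \leq 10^{-K}\) for some true minimiser \(x^*\).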
A seemingly unrelated problem is the phenomenon of instabilities in deep learning. It is well documented that deep learning for classification problems becomes unstable, and we will discuss how deep learning for inverse problems in general also becomes unstable. Paradoxically, despite these instabilities, it is possible to produce neural networks that are completely stable for Magnetic Resonance Imaging (MRI). We will show that such neural networks require no training (no deep learning), that they come with recovery guarantees, and that there exist efficient algorithms to compute them. The existence of such networks is intimately related to the (K, K-1, K-2)-phenomenon described above regarding the existence of algorithms for optimisation problems in modern mathematics of information.
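As a brief sketch of the underlying inverse-problem setting (standard notation, assumed here rather than quoted from the abstract), one observes

\[
y = Ax + e, \qquad A \in \mathbb{C}^{m \times N}, \quad m < N,
\]

and seeks to recover \(x\) from the measurements \(y\); for MRI, \(A\) is a subsampled discrete Fourier transform. A reconstruction map \(\Phi \colon \mathbb{C}^m \to \mathbb{C}^N\), such as a trained network, is unstable when there exist perturbations \(e\) with \(\|e\|\) small but \(\|\Phi(Ax + e) - x\|\) large; the stable, training-free networks mentioned above are designed so that no such perturbations exist.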