## Overview

In a world of increasing complexity, how can engineers design cutting-edge technology that performs at its best and is provably robust to changes in its operating environment?

My research seeks to address this challenge through the development of innovative mathematical and numerical techniques for studying nonlinear differential equations.

Equations of this type model numerous engineering and physical systems—ranging from wind turbines and heat exchangers to ocean currents and turbulent fluid flows—but are often hard to solve, even on modern supercomputers. Rather than trying to compute accurate explicit solutions, therefore, my work aims to prove rigorous bounds on key system properties, such as the mean heat flux through a heat exchanger or the amount of energy dissipated by turbulence in a pipe. This is achieved through an interdisciplinary approach that combines mathematical analysis and dynamical systems theory with tools from the traditionally separate fields of polynomial optimization and semidefinite programming.

To learn more about my work and research interests, please expand the tabs below or have a look at my list of publications.

## Studying nonlinear dynamics via auxiliary functions

One of my long-term goals is to use optimization to answer questions regarding the average or extreme behaviour of complex physical or engineering systems, such as "How much heat can atmospheric currents transport on average?" or "What is the largest force on a wind turbine blade due to wind gusts?", which cannot be easily answered by traditional numerical simulations.

My work pursues this goal by placing rigorous bounds on the average or extreme behaviour of solutions to nonlinear differential equations. This can be done by constructing so-called *auxiliary functions*, which generalize the Lyapunov functions used in control theory to prove the nonlinear stability of equilibrium states. The crucial observation is that the construction of auxiliary functions can often be posed as an optimization problem with polynomial inequality constraints, which can in principle be solved computationally thanks to a powerful connection between *sum-of-squares polynomials* and a branch of convex optimization called *semidefinite programming*.
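To illustrate this connection on a toy example (a hypothetical sketch using only NumPy; the polynomial and parameterization are chosen purely for illustration, not taken from my papers), certifying that a polynomial is a sum of squares amounts to finding a positive semidefinite *Gram matrix* in an affine family determined by coefficient matching—which is exactly the feasibility question a semidefinite program answers:

```python
import numpy as np

# Certify p(x) = x^4 + 2x^2 + 1 >= 0 by writing p = z(x)^T Q z(x)
# with monomial basis z(x) = [1, x, x^2] and Q positive semidefinite.
# Matching coefficients leaves one free parameter t:
#   Q(t) = [[1, 0, t], [0, 2 - 2t, 0], [t, 0, 1]].
# Finding a PSD member of this affine family is a (tiny) SDP.

def gram(t):
    return np.array([[1.0, 0.0, t],
                     [0.0, 2.0 - 2.0 * t, 0.0],
                     [t, 0.0, 1.0]])

def is_psd(Q, tol=1e-9):
    # PSD check via the smallest eigenvalue of a symmetric matrix
    return np.linalg.eigvalsh(Q).min() >= -tol

# t = 0 gives the decomposition p = 1^2 + (sqrt(2) x)^2 + (x^2)^2 ...
Q = gram(0.0)
assert is_psd(Q)

# ... and z^T Q z reproduces p(x) at any point:
for x in np.linspace(-2.0, 2.0, 9):
    z = np.array([1.0, x, x * x])
    assert abs(z @ Q @ z - (x**4 + 2.0 * x**2 + 1.0)) < 1e-10

# t = 1 gives the rank-1 certificate p = (x^2 + 1)^2; t = 2 leaves the
# PSD cone, so it matches the coefficients but certifies nothing.
assert is_psd(gram(1.0))
assert not is_psd(gram(2.0))
```

In realistic problems the affine family is high-dimensional and the PSD member is found numerically by an SDP solver rather than by inspection; the sketch only shows what the solver is searching for.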

In collaboration with David Goluskin, I have demonstrated that numerically optimized auxiliary functions can produce sharp bounds on the average energy of solutions to the Kuramoto-Sivashinsky equation, as well as on extreme events. Moreover, in joint work with David Goluskin, Sergei Chernyshenko and Deqing Huang, I have extended the auxiliary function framework to stochastic systems. Finally, together with Sergei Chernyshenko and Mayur Lakshmi, I have shown that auxiliary functions can be used to identify solutions of a nonlinear differential equation that optimize a given objective function (for instance, energy) in a time-averaged sense.

I am currently exploring new uses of auxiliary functions, as well as computational approaches for optimizing them for systems governed by nonlinear partial differential equations.

## Rigorous scaling laws in fluid mechanics

Turbulent flows are notoriously difficult to analyse mathematically and simulate numerically. A particular challenge is to determine how key properties of the flow, such as the mean energy dissipation in a pipe or the average amount of heat transported by natural convection, depend on the flow's governing parameters, such as the Reynolds number.

My work leverages convex optimization to prove new scaling laws for turbulent fluid flows, derived directly from the Navier-Stokes equations without introducing physically reasonable, but ultimately unproven, assumptions. I have demonstrated that the so-called *background method* for bounding the properties of turbulent flows, introduced by Doering & Constantin in the 1990s, can be implemented computationally using algorithms for semidefinite programming (see this review paper for details). This often improves bounds proved by hand by over an order of magnitude.

Most importantly, however, the numerical results can be used to inspire the proof of new rigorous scaling laws. I have done this for convection driven by surface tension (joint work with C. Nobili and A. Wynn), convection driven by internal heating (joint work with A. Arslan, J. Craske, A. Wynn and A. Kumar), and rotating convection with Ekman pumping (joint work with B. Pachev, J. Whitehead and I. Grooms).

## Large-scale semidefinite and polynomial optimization

I am interested in developing scalable methods to solve a class of optimization problems known as semidefinite programs (SDPs), as well as SDP relaxations of intractable polynomial optimization problems.

**Fast first-order solvers:** Alongside my collaborators Yang Zheng, Antonis Papachristodoulou, Paul Goulart and Andrew Wynn, I have developed fast first-order algorithms based on the *alternating direction method of multipliers* (ADMM) for solving large-scale SDPs characterized by either *aggregate sparsity* or *partial orthogonality*. Aggregate sparsity often arises in the study of networked systems and in semidefinite relaxations of intractable optimization problems over graphs, while partial orthogonality is typical of SDPs arising from moment-SOS relaxations of polynomial optimization problems. By exploiting these special problem structures, our prototype solver CDCS considerably outperforms state-of-the-art interior-point algorithms on a broad range of problems.
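The flavour of such first-order methods can be conveyed by a generic textbook ADMM sketch (this is an illustrative example of ADMM applied to a small SDP, not the CDCS algorithm itself; the problem and parameter choices are mine). It solves the nearest-correlation-matrix problem, alternating a cheap closed-form update, a projection onto the PSD cone, and a dual ascent step:

```python
import numpy as np

# ADMM for a small SDP: min ||X - C||_F^2 subject to X PSD, diag(X) = 1
# (the nearest-correlation-matrix problem), via the splitting X = Y.

def proj_psd(M):
    """Project a symmetric matrix onto the PSD cone by clipping negative
    eigenvalues. This eigendecomposition is the per-iteration bottleneck
    that structure-exploiting solvers aim to make cheaper."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    return (V * np.clip(w, 0.0, None)) @ V.T

def nearest_correlation(C, rho=1.0, iters=1000):
    n = C.shape[0]
    X = C.copy()
    Y = np.eye(n)
    U = np.zeros_like(C)
    for _ in range(iters):
        # X-update: closed form for the quadratic objective plus the
        # linear constraint diag(X) = 1 (the update is entrywise).
        X = (C + rho * (Y - U)) / (1.0 + rho)
        np.fill_diagonal(X, 1.0)
        # Y-update: projection onto the PSD cone.
        Y = proj_psd(X + U)
        # Dual ascent on the consensus constraint X = Y.
        U += X - Y
    return Y

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
C = (A + A.T) / 2          # symmetric but generally indefinite
np.fill_diagonal(C, 1.0)
X = nearest_correlation(C)  # PSD, with unit diagonal up to a small residual
```

Each iteration costs one eigendecomposition, which is why ADMM-type methods scale to problems far beyond the reach of interior-point solvers when the projection (or the linear algebra around it) can exploit structure such as aggregate sparsity.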

**Sparsity in polynomial matrix inequalities:** In collaboration with Yang Zheng, I have proved *sum-of-squares chordal decomposition* theorems for sparse polynomial matrices, which extend well-known chordal decomposition results for constant positive semidefinite matrices. We have obtained sparse-matrix versions of many *Positivstellensätze*—theorems on the representability of positive polynomials as sums of squares—including classical results due to Artin, Reznick, and Putinar. These theoretical results guarantee the convergence of sparsity-exploiting hierarchies of sum-of-squares relaxations for optimization problems with polynomial matrix inequality constraints, which are often encountered in optimal control.
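For readers unfamiliar with these representation theorems, the prototypical scalar result is Putinar's Positivstellensatz, stated here for reference in its classical form (our work proves sparse-matrix analogues of statements of this kind):

```latex
% Putinar's Positivstellensatz (classical scalar statement):
% let K = {x \in R^n : g_1(x) \geq 0, \dots, g_m(x) \geq 0} and suppose
% the quadratic module generated by g_1, ..., g_m is Archimedean.
% Then every polynomial p that is strictly positive on K admits the
% representation
\[
  p = \sigma_0 + \sum_{i=1}^{m} \sigma_i \, g_i,
  \qquad \sigma_0, \dots, \sigma_m \ \text{sums of squares of polynomials}.
\]
```

Truncating the degrees of the multipliers $\sigma_i$ yields the moment-SOS hierarchy of semidefinite relaxations mentioned above, and its convergence rests precisely on representation theorems of this type.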

## Global minima for integral variational problems

Many physical systems evolve in order to minimize a suitable energy function. For instance, the microstructure of a material and the deformation of a structure under externally applied forces result from minimizing the total internal strain energy.

Recently, I have become interested in the problem of finding the global minimum and the corresponding global minimizers of energy functions in the form of integral functionals. The existence of these global minimizers can often be proven using standard techniques from the calculus of variations. However, finding them in practice is usually a challenge because many energy functions are not convex and, consequently, have multiple local minima.

In collaboration with Alexander Chernyavsky, Jason Bramburger, David Goluskin, and Ian Tobasco, I have developed a relaxation technique that leverages polynomial optimization to bound the global minimum of an energy function rigorously from below. We have also proven that for certain types of energy functions this approach is *sharp*, meaning that the bounds converge to the exact global minimum. Check out our paper to learn more about these results.
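The idea behind such rigorous lower bounds can be sketched on a finite-dimensional toy problem (my own illustrative example, not one from the paper): to bound the minimum of a nonconvex function from below, one seeks the largest constant whose subtraction leaves a sum of squares, and sharpness means this certified bound equals the true minimum:

```python
import numpy as np

# Toy version of "bound the global minimum from below": the nonconvex
# polynomial p(x) = x^4 - 2x^2 has global minimum -1, attained at x = +-1.
# The largest lambda such that p - lambda is a sum of squares is
# lambda = -1, certified by the identity p(x) + 1 = (x^2 - 1)^2.

def p(x):
    return x**4 - 2.0 * x**2

def sos_certificate(x):
    return (x**2 - 1.0) ** 2   # manifestly nonnegative

xs = np.linspace(-3.0, 3.0, 101)
# The algebraic identity holds, so p(x) >= -1 everywhere ...
assert np.allclose(p(xs) + 1.0, sos_certificate(xs))
# ... and the bound is sharp: it is attained at x = 1.
assert p(1.0) == -1.0
```

For integral functionals the same question is infinite-dimensional, which is exactly what makes constructing (and proving the sharpness of) such certificates nontrivial in our setting.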

Together with Federico Fuentes, I am also investigating whether global minimizers can be approximated directly by combining traditional finite-element methods with sum-of-squares relaxations. This is still work in progress, but so far we have proven that our approach is guaranteed to work for a broad class of energy functions (namely, *quasiconvex* ones), as well as for the optimal control of semilinear elliptic partial differential equations.