Reinforcement Learning for flow control

Model-free Reinforcement Learning algorithms have recently been employed to discover flow control strategies, such as efficient drag reduction by suppressing vortex shedding in the wake of a circular cylinder. These methods rely on probes placed in the flow downstream of the body to achieve full-state observability and control.
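
As a rough illustration only (not the authors' implementation), the interaction loop underlying such studies can be sketched as below, assuming a Python environment whose observations are wake-probe pressure readings, whose action is the blowing rate of actuation jets, and whose reward penalises drag and actuation power. The environment class, its dynamics and all names are hypothetical placeholders for a coupled CFD solver.

    import numpy as np

    class CylinderWakeEnvStub:
        """Hypothetical stand-in for a CFD flow-control environment.

        Observations: pressure readings at n_probes locations in the wake.
        Action: scalar mass-flow rate of a pair of actuation jets.
        Reward: negative drag minus a penalty on actuation power.
        The dynamics below are random placeholders, not a flow solver.
        """

        def __init__(self, n_probes=8, episode_len=200, seed=0):
            self.n_probes = n_probes
            self.episode_len = episode_len
            self.rng = np.random.default_rng(seed)
            self.t = 0

        def reset(self):
            self.t = 0
            return self.rng.normal(size=self.n_probes)   # initial probe pressures

        def step(self, action):
            self.t += 1
            obs = self.rng.normal(size=self.n_probes)    # next probe pressures
            drag = 1.0 + 0.1 * self.rng.normal()         # placeholder drag signal
            power = 0.05 * float(action) ** 2            # actuation power penalty
            reward = -(drag + power)
            done = self.t >= self.episode_len
            return obs, reward, done

    def run_episode(env, policy):
        """Roll out one episode with a (observation -> action) policy."""
        obs, total_reward, done = env.reset(), 0.0, False
        while not done:
            action = policy(obs)
            obs, reward, done = env.step(action)
            total_reward += reward
        return total_reward

    if __name__ == "__main__":
        env = CylinderWakeEnvStub()
        random_policy = lambda obs: np.clip(np.random.normal(), -1.0, 1.0)
        print("episode return:", run_episode(env, random_policy))

In practice the placeholder dynamics would be replaced by a flow solver and the random policy by a model-free algorithm (e.g. PPO or deep Q-learning) trained to maximise the episode return.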

The present approach considers real-world applicability by restricting sensing to pressure probes mounted on the base of a square bluff body. Surface-mounted sensing is shown to limit observability of the flow and to reduce drag reduction performance by 65% compared with probes optimally located downstream of the body. A method integrating memory into the control architecture is proposed to improve drag reduction performance in partially observable systems.

Memory is integrated by augmenting the controller input with a time series of lagged observations. A power expenditure study shows that the active drag reduction strategies discovered with Reinforcement Learning are highly power-efficient.
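
A minimal sketch of this lagged-observation augmentation is given below, assuming the same reset()/step() environment interface as the stub above; the wrapper name and the n_lags parameter are illustrative, not taken from the study.

    from collections import deque
    import numpy as np

    class LaggedObservationWrapper:
        """Augments each observation with the previous n_lags - 1 readings.

        The stacked vector [o_{t-n_lags+1}, ..., o_{t-1}, o_t] is fed to the
        controller in place of the instantaneous probe reading, giving a
        memoryless policy access to recent flow history.
        """

        def __init__(self, env, n_lags=4):
            self.env = env
            self.n_lags = n_lags
            self.history = deque(maxlen=n_lags)

        def _stacked(self):
            return np.concatenate(list(self.history))

        def reset(self):
            obs = self.env.reset()
            self.history.clear()
            for _ in range(self.n_lags):      # pad the history with the first reading
                self.history.append(obs)
            return self._stacked()

        def step(self, action):
            obs, reward, done = self.env.step(action)
            self.history.append(obs)          # oldest reading is dropped automatically
            return self._stacked(), reward, done

Stacking, say, the last four readings of eight base-pressure probes gives the controller a 32-dimensional input; such observation stacking is a lightweight alternative to recurrent policies for recovering information lost under partial observability.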

These results are a first step towards realistic implementation of Reinforcement Learning for active drag reduction in the type of partially observable systems often encountered in the real world.

For further details, please contact dd-aerospace-eng-research-centre@imperial.ac.uk and Dr George Rigas.