The field of artificial intelligence (AI) and machine learning (ML) has flourished in recent years, particularly following the so-called deep learning revolution of the late 2000s, driven primarily by developments in academic research, the commoditisation of high-performance computing (e.g. in the form of GPUs), and the ever-increasing volume of digitised information.

ML techniques, such as artificial neural networks and decision tree-based ensemble methods, have been commonly used in High Energy Physics (HEP) for more than 30 years. Perhaps the most high-profile example in recent times is the use of ML technologies by experiments at the Large Hadron Collider (LHC) to aid the discovery of the Higgs boson. More recently, the "deep learning" paradigm shift has begun to fundamentally change how scientific research is performed within our field. It has brought sophisticated new techniques to the forefront of the physicist's toolset, which is timely given that the latest generation of HEP experiments face exascale computing and data challenges. Modern AI/ML is ideally suited to tackling these issues.

In response, a large community of physicists and computer scientists has grown to solve HEP-oriented problems through the application of ML techniques. The most common applications involve classification and regression tasks within the context of reconstructing the physics processes recorded by particle detectors. Another rich area of research is the use of generative adversarial networks as an alternative to Monte Carlo techniques for simulating the electronic response of detectors to the passage of particles through them. While the application of ML within HEP is relatively mature, many interesting challenges remain. Some examples include: how to treat systematic uncertainties when employing ML models; how to interpret what the models learn; and how to deploy ML algorithms within real-time decision-making hardware under severe resource and latency constraints. The Imperial HEP group works at the forefront of these and other research activities.
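To make the canonical classification task concrete, the sketch below trains a decision tree-based ensemble to separate "signal" from "background" events. This is an illustrative toy, not the group's actual analysis code: the synthetic Gaussian features stand in for reconstructed quantities such as invariant masses or transverse momenta, and the choice of scikit-learn's `GradientBoostingClassifier` is one of many reasonable options.

```python
# Illustrative toy example of signal/background classification in HEP.
# Features are synthetic stand-ins for reconstructed detector quantities.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

n = 5000
# Background: broad distribution; signal: shifted, narrower peak.
background = rng.normal(loc=0.0, scale=1.0, size=(n, 4))
signal = rng.normal(loc=0.8, scale=0.7, size=(n, 4))

X = np.vstack([background, signal])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = background, 1 = signal

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
clf.fit(X_train, y_train)

# The area under the ROC curve is a standard figure of merit for
# quantifying signal/background separation power.
scores = clf.predict_proba(X_test)[:, 1]
auc = roc_auc_score(y_test, scores)
print(f"ROC AUC: {auc:.3f}")
```

In a real analysis the same pattern applies, but the inputs come from reconstructed collision events and the classifier output feeds into a statistical interpretation rather than a single summary metric.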

Further, experts within the HEP field are highly engaged with the wider AI/ML community: through regular seminars and workshops hosted by, for example, the Inter-experimental LHC Machine Learning working group at CERN; through open data initiatives such as the Higgs and TrackML Kaggle challenges; and through dedicated training schools such as the Yandex Machine Learning school for High-Energy Physics. The Imperial HEP group was instrumental in fostering this strong partnership with Yandex, and a residential two-week ML school is held annually within the group. The group also regularly holds seminars on state-of-the-art ML research and has strong links with ML experts across College, including the Departments of Physics, Maths, and Computing, as well as the Data Science Institute and AI Network.

Imperial contribution

The Imperial HEP group is involved in a wide range of machine learning activities, which broadly fall into two categories: applications within data analysis, and applications within real-time hardware systems, dubbed AI on Chip.