AI in healthcare:
moving into practice

In the past five years, AI in healthcare has progressed rapidly from concept to
deployment. It is now set to reshape almost every aspect of health and medicine,
spanning fields such as drug discovery, clinical decision-making, preventive
medicine, and public health.

by Ian Mundell for Imperial Enterprise – April 2026

Digital illustration - EEG readout on background of zeroes and ones.

Healthcare was identified early on as an area where artificial intelligence (AI) could rapidly offer tangible benefits, automating and even improving upon the problem-solving carried out by doctors. The earliest diagnosis support systems trained on data appeared in the 1970s, with AI for medical imaging analysis emerging in the 2010s.

In recent years, that promise has gathered dramatic momentum as AI tools have been tested in clinical settings and moved towards regulatory approval. Perhaps even more impressive has been the way that AI has been used behind the scenes as a tool to develop medical technologies. For example, once you have AI to turn medical scans into digital models of the human heart, you can build another tool that asks how that heart is likely to respond to different treatments and lifestyle choices. Or yet another that looks for new drug targets. 

Meanwhile, direct adoption of AI for healthcare has increased, spurred on by increasing familiarity with digital healthcare and the advantages it can bring. Convenience and cost-saving in consulting rooms or pharmaceutical labs are often the ice-breakers, but the more revolutionary uses of AI, such as discovering new drugs and bringing them to market, still remain to be fully realised. 

Imperial College London is working intensively at the interface between medical science and computing, and helping to move innovations in many areas of healthcare AI into clinical use together with partners and investors.

Clinical decision-making

Diagnostic AI systems are most straightforward to create when the target is a specific clinical condition, such as a risk of heart disease or a suspected cancer. But AI is also being used to inform the very first contacts between a patient and a medical professional, when the health issues are unknown.

Timing is crucial in this situation. Once a doctor such as a general practitioner (GP) starts to talk with a patient, they start to form ideas about a primary diagnosis and appropriate actions. Any advice from an AI that comes after this point has to contend with the doctor’s unconscious commitment to this line of thinking and so may have limited influence. “So the challenge is how you start to influence decision making at the very point or even before a patient comes to the clinician,” says Professor Brendan Delaney, who holds a chair in medical informatics and decision-making in Imperial's Department of Surgery and Cancer, as well as practising as a GP.

He had been working on AI diagnosis and decision support for some years when he was approached by Dr Steven Charlap, the founder of SOAP Health, a US startup that had developed an AI doctor able to take a complete medical history from a patient in advance of a consultation. Not only was this the right timing for the tools Professor Delaney was working on, but it also gathered valuable data for his systems to analyse.

“When you have a full set of information, you can deploy some quite rich differential diagnosis models,” he says. “With those two elements, you can tell the clinician: this is what the patient is complaining about, these are the things that might be wrong with them, and here are some suggestions for what you might do.”

You can tell the clinician: this is what the patient is complaining about, these are the things that might be wrong with them, and here are some suggestions for what you might do.

Taking this forward involves both technical and regulatory challenges, with Europe and the US having different approaches to market approval. “For the AI doctor, we are about to do a clinical study in the UK comparing the history taken by the AI with a history taken by a doctor, in real patients in practices,” Professor Delaney says.

For the US, SOAP had developed its own personal health advisor, called medome.ai, built on generative AI, but this approach would not be possible in the more tightly regulated environment in the UK and EU. So, Professor Delaney, Professor Alessandra Russo and Professor Mauricio Barahona, along with colleagues at Queen Mary University of London (QMUL), are working on Imperial’s own primary care diagnostic AI, based on neuro-symbolic AI, an approach that is explainable and less prone to hallucination. This is being trained and validated on routine GP data from north-west London and Sussex, and nationally from the Clinical Practice Research Datalink.

“We’ve trained the model to be able to predict lung cancer from primary care patient data collected up to three years before a lung cancer diagnosis, and we now have quite a good level of prediction.” QMUL is collaborating with Imperial on pancreatic cancer, and work is now under way to include other cancers and other diagnoses. Finally, a layer of formal logic will be added to make the model’s decisions more explainable to patients.

We’ve trained the model to be able to predict lung cancer from primary care patient data collected up to three years before a lung cancer diagnosis, and we now have quite a good level of prediction.

Other AI decision-making tools developed at Imperial are also making progress. For example, the AI Clinician developed by Professor Aldo Faisal’s group, along with Professor Anthony Gordon in the Department of Surgery and Cancer, has now been installed in four UK hospitals, where it is routinely offering doctors treatment recommendations in conditions such as sepsis. To do this, the AI creates a patient’s digital twin, enabling it to ask “what if” questions about treatments and to automatically optimise drug selection and dosages while the patient is cared for in hospital. Meanwhile, a partnership with the Children's Hospital of Orange County in the US is adapting the system to help make complex decisions and provide faster and more accurate treatment recommendations for critically ill children.

At Imperial’s National Heart and Lung Institute, Professor Jamil Mayet and Dr Amit Kaura are testing an AI system that can tell whether a patient arriving in accident and emergency with acute chest pain is having a heart attack. At the moment, a single blood test is used to determine admission, but a lot of false alarms still get through, putting a strain on already stretched A&E resources. By looking at a panel of ten readily available clinical features, the AI system promises to rule out more of these cases where a heart attack is not the cause of the chest pain.
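As a sketch of how such a rule-out model might work: a model scores a panel of ten clinical features and only rules out a heart attack when the predicted risk falls below a conservative threshold. The actual features, model form and coefficients of the Imperial system are not public, so everything below, including the logistic-regression form, is illustrative.

```python
import math

# Hypothetical ten-feature panel and weights, for illustration only.
BIAS = -6.0
WEIGHTS = {
    "troponin_ng_l": 0.08,      # the blood test already used for admission
    "age_years": 0.03,
    "heart_rate_bpm": 0.01,
    "systolic_bp_mmhg": -0.005,
    "st_segment_change": 2.0,   # 1 if the ECG shows ST changes
    "typical_chest_pain": 1.0,  # 1 if the pain is typical of cardiac origin
    "prior_mi": 0.8,            # previous heart attack
    "diabetes": 0.5,
    "current_smoker": 0.4,
    "creatinine_mg_dl": 0.3,
}

def heart_attack_probability(features: dict) -> float:
    """Logistic regression over the ten-feature panel."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def can_rule_out(features: dict, threshold: float = 0.02) -> bool:
    """Rule out a heart attack only when the predicted risk is very low."""
    return heart_attack_probability(features) < threshold

low_risk = {"troponin_ng_l": 3, "age_years": 30, "heart_rate_bpm": 70,
            "systolic_bp_mmhg": 120, "st_segment_change": 0,
            "typical_chest_pain": 0, "prior_mi": 0, "diabetes": 0,
            "current_smoker": 0, "creatinine_mg_dl": 0.9}
```

The asymmetry is the point: a rule-out tool is tuned so that patients only avoid admission when the model is confident a heart attack is absent, which is what reduces false alarms without missing true cases.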

Screen with images of the heart

SmartHeart turned medical images into 3D models.


Imaging and diagnostics

Medical imaging was identified early as an area where AI could add value, for example supporting radiologists in reading and interpreting magnetic resonance imaging (MRI) scans for the diagnosis of cancers or heart disease. Having demonstrated that such systems can work, the focus now is on optimising them.

Dr Chen Qin in the Department of Electrical and Electronic Engineering, for example, is working on systems to increase the trustworthiness and improve failure management in AI-mediated MRI scans, equipping them with the ability to model uncertainty and handle cases that fall outside the usual range of results. She is also developing machine learning models that address patient motion in MRI scans, a limiting factor when it comes to scanning infants and children.

Motion was also a challenge in the multi-centre SmartHeart project, led by Professor Daniel Rueckert from the Department of Computing, which developed AI methods that could correct for the movement of the beating heart, along with other tools that promise to make MRI heart scans faster, more efficient, and more accessible for patients.

Computational image of a heart

A computational model of the left ventricle of the heart using data from 80,000 participants in UK Biobank showing the effect of genetic variants on heart function. Image: Declan O'Regan


And some highly impressive work with AI and medical imaging is pursuing possibilities that arise once AI has worked on the scans. Professor Declan O'Regan, for example, started out working on AI systems to create 3D digital models of the human heart from MRI scans. “We’ve now done that for 80,000 people in the UK Biobank, which makes it the largest repository of these three-dimensional models of the heart in the world,” he says.

His group at the MRC Laboratory of Medical Sciences went on to design AI tools to look for connections between the heart models and other data in the Biobank, from health histories to genetic and metabolic profiles. This can reveal important new information about the causes of heart disease and allows personalised predictions to be made from a patient’s MRI scan. “The patterns learned by the AI allow it to simulate what might be the cause of that patient’s disease and how that disease might evolve in the future,” Professor O’Regan explains.

The patterns learned by the AI allow it to simulate what might be the cause of that patient’s disease and how that disease might evolve in the future.

The models can also be asked “what-if” questions about the effect a treatment or an operation might have on an individual with a specific heart history. “Doing these in silico, counterfactual experiments is a really powerful way of understanding how interventions have an effect on the heart, without having to do any experiments or trials on patients.”

The heart models and all the connected data can also be used to identify new drug targets, not just for heart disease but for ageing more generally. This is a line of thinking Professor O’Regan is pursuing in collaboration with pharmaceutical companies such as Bayer and Calico Labs.

“Ageing is the main risk factor for cardiovascular disease, so if we can understand what genetic differences mean that people age well, or have accelerated ageing, then both of those could potentially help us develop targets for preventative treatments, so that people have a longer period of healthy life before age-related diseases start,” he says.

Using AI to connect images and data is also behind the ‘virtual biopsy’ method developed by Professor Eric Aboagye’s group in the Department of Surgery and Cancer. The system merges computed tomography (CT) scans with information about the chemical make-up of tumours and normal lung tissue, allowing lung cancer types to be classified without the need for an invasive biopsy. This in turn leads to more reliable predictions about patient outcomes.

A man blows into a balloon

SpiraCheck uses AI to analyse the volatile organic compounds in a person’s breath.


Non-invasive cancer diagnosis is also the aim of SpiraCheck, a spinout from Professor George Hanna’s group in the same department. It is applying AI to analyse the volatile organic compounds in a person’s breath, looking for patterns that point to different kinds of cancer. A positive result would allow patients to be fast-tracked for more intensive tests and possible treatment.

Professor O’Regan’s group has patented the individualised prediction of outcome in heart failure, and carried out a study to understand how that technology might work in clinical practice. The next step would be to work with an instrument manufacturer to integrate the model into a scanner. “The technology could analyse images of a patient’s heart and make an individualised prediction of how their health will change and what their expected mortality might be, while they are having the scan,” he says.

Professor Declan O'Regan


Predictive and preventive medicine

One of the tasks that AI is good at is finding patterns in data that are hard for clinicians to see. This is the idea behind work with electrocardiograms (ECGs) carried out by Dr Arunashis Sau and Professor Fu Siong Ng from the National Heart and Lung Institute. 

An ECG measures the electrical activity of the heart and is normally used to check its rate and rhythm. However, it is considered a relatively crude measure, and doctors often turn to more sophisticated but costly forms of imaging. 

There is useful information in ECGs, however, and AI can help read them if it has enough data to work with. So the team at the institute went looking. 

Armed with millions of ECGs and anonymised patient records from partners in Brazil and the US, Dr Sau and his colleagues used AI to sort people into high-risk and low-risk groups. They also found that AI could help predict if someone might develop a specific heart disease, and even non-heart conditions such as diabetes and kidney disease.

There is useful information in ECGs, and AI can help read them if it has enough data to work with.

What the AI sees in an ECG is a digital sign, or biomarker, that disease processes are underway. In order to understand this better, the researchers have also spent time looking for physical signs of the underlying conditions. “For example, are there genetics that determine you are at risk of a certain disease as picked up by this biomarker, are there protein changes, are there structural changes in the heart,” Dr Sau says.

To translate this research into clinical practice, the researchers initially aim to diagnose existing conditions rather than predict the future. “If someone comes in to have an ECG we are seeing if it can pick up underlying heart failure or valve disease that would never be picked up by a human doctor,” Professor Ng explains. “If there is any suspicion, then we will do an echocardiogram to confirm it right away." Their new spinout, Cadiovolt.ai, will work on getting regulatory approval to use the system in everyday practice.

Professor Fu Siong Ng and Hesham Aggour discuss AI-ECG research. Photo: Dave Guttridge

Surgery

Another way that AI can guide the clinician is in surgery, providing better images of operations while they are underway. This is the goal of EnAcuity, a joint spinout from Imperial and University College London, which is working on an AI image processing system designed to give the output from a standard camera used in keyhole surgery the qualities of a hyperspectral imaging system.

Hyperspectral imaging is better at discriminating colours than the human eye, but is relatively slow. The time lag involved is not long enough to inconvenience astronomers or researchers in earth observation, but to help a surgeon, image production needs to be in real time.

“The hyperspectral imaging hardware is not capable of this, or not yet, so AI presents an excellent opportunity to bridge the gap,” says Dr Maria Leiloglou, EnAcuity’s co-founder and chief executive, who previously completed a PhD in the Hamlyn Centre for Robotic Surgery and the Department of Surgery and Cancer. 

Applied to the conventional cameras used in keyhole surgery, EnAcuity’s image processing system allows surgeons to see tissue perfusion and critical structures more clearly.

Applied to the conventional cameras used in keyhole surgery, EnAcuity’s image processing system allows surgeons to see tissue perfusion and critical structures more clearly, and so helps prevent unwanted injuries. For example, after surgery to remove bowel cancer, the intestine must be repaired.

“You need to make sure that the two ends of the bowel that you reconnect are healthy and receiving blood,” Dr Leiloglou explains. “Our technology visualises the blood reaching the bowel tissue, and helps the surgeon to make decisions about how to make that reconnection and ensure it is healthy.” This in turn lowers the chances of complications due to leakage from the repaired bowel. 

The company’s system is now being validated using a unique library of surgical video data from keyhole surgery procedures at Imperial College Healthcare NHS Trust. “We’ve started with colorectal cancer surgery, but our technology can be applied to a wide range of applications and potentially improve outcomes in many other kinds of surgery,” Dr Leiloglou says.

EnAcuity founders Dr Maria Leiloglou and Dr Tobias Czempiel.


Epidemiology and public health

Artificial intelligence also has a contribution to make to public health at the level of populations, for instance in studying infectious disease and preparing for pandemics. “Machine learning is a very powerful tool for capturing complex phenomena, so it stands to reason that it may be able to help in a high-stakes field like infectious disease, where you have so much data and so many different facets of the infection process,” says Professor Samir Bhatt from the School of Public Health.

The forms of machine learning that use knowledge graphs or semantic networks are particularly fruitful. These networks consist of 'nodes', representing things in the real world such as objects, events, situations or concepts, and the relationships between them, known as 'edges'. “These are extremely useful in trying to map relationships between individuals with infectious diseases, in mapping evolutionary relationships between strains of the virus, mapping regions and looking at spread of infections,” says Professor Bhatt.
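To make the nodes-and-edges idea concrete, here is a minimal sketch of such a graph: each node maps to a list of labelled outgoing edges, and a simple traversal then answers questions like "who is downstream in this transmission chain?". All the entities below are invented for illustration.

```python
from collections import defaultdict, deque

# A toy knowledge graph: nodes are real-world entities (patients, virus
# strains, regions) and edges carry a relationship label.
graph = defaultdict(list)

def add_edge(g, source, relation, target):
    """Record a labelled, directed relationship between two nodes."""
    g[source].append((relation, target))

add_edge(graph, "patient_A", "infected", "patient_B")
add_edge(graph, "patient_B", "infected", "patient_C")
add_edge(graph, "strain_X", "evolved_into", "strain_Y")
add_edge(graph, "patient_A", "lives_in", "region_1")

def downstream_cases(g, start):
    """Follow 'infected' edges to find everyone in a transmission chain."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for relation, target in g[node]:
            if relation == "infected" and target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

# downstream_cases(graph, "patient_A") -> {"patient_B", "patient_C"}
```

Because the edges are labelled, the same structure can hold transmission links, evolutionary relationships between strains and geographic information side by side, which is what makes these graphs useful for the mapping tasks Professor Bhatt describes.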

In his own work, he is particularly interested in applying AI approaches to phylodynamics, a field that traces the evolutionary lineage of organisms such as viruses. “While you cannot observe the first influenza virus, using these back-tracking evolutionary methods you can infer things about the past. Applying AI allows you to scale these studies, giving a greater depth of information and merging the genetic and protein data.” 

Semantic networks are extremely useful in trying to map relationships between individuals with infectious diseases, mapping evolutionary relationships between strains of a virus and looking at spread of infections. 

Fascinating in its own right, phylodynamics can provide vital information for predicting and managing pandemics. “It allows you to answer in more detail whether you have a new variant of concern or an existing variant, it allows you to track the rate of natural selection and tell if it is positive selection or negative selection, and it allows you to date when things happened.” 

AI’s contribution in other aspects of infectious disease research is easier to see. Systems such as AlphaFold, the AI system for protein structure prediction, open up new lines of research for vaccines and treatments. “We don’t just have to look at the genetic sequence, we can now start looking at how it folds and what this might mean for vaccines targeting new variants,” Professor Bhatt says.

AI may also have a democratising effect, making it easier for researchers in less well-resourced parts of the world to carry out research. “Large language models provide a way to build capacity in these countries without necessarily having to train individuals there. These reasoning models can be used to run an analysis pipeline, for example, and then to generate a summary, explaining the results.”

Professor Samir Bhatt


Generative AI for health

Systems based on large language models, such as ChatGPT, are a mixed blessing when it comes to healthcare. They may be powerful, but they are not always reliable, and not at all expert. “Using ChatGPT for medical reasoning is a bit like having a literature student reading a medical textbook and then answering questions about medicine. It knows English, and it can speak English, but it doesn’t know medicine, with the intuition of a medic,” explains Professor Faisal.

To address this, the School of Convergence Science is developing Nightingale AI, a generative AI foundation model that can reason about health. “A foundation model for health would not just reason on language. It would work with medical data, from electronic healthcare records to readings from an ECG or an MRI,” says Professor Faisal. “It should be able to use all the data coming in, in all modalities, and to output them to all the modalities.” 

A foundation model for health would not just reason on language. It would work with medical data, from electronic healthcare records to readings from an ECG or an MRI.

For example, if a patient has a chest infection and a doctor asks about the effect of prescribing a certain antibiotic, Nightingale AI should be able to predict what their chest X-ray should look like after the course of antibiotics. If they have taken heart medication, or have heart surgery, it should be able to predict how their ECG should look.

Nightingale AI is being developed using anonymised patient healthcare records donated by partners around the world, as well as reading and learning from every publicly accessible digital medical publication. It has also been allocated an unprecedented amount of compute time for an academic project on the UK’s Isambard AI supercomputer.

When operational, Nightingale AI could be used to drive a better NHS 111 service, the UK advice line for health problems that are not life-threatening, or to help individuals track their healthcare more broadly. “It’s a technology that the NHS could use to have a digital twin of every single patient in the NHS, so that the system could project the health trajectory of every single citizen,” Professor Faisal says. “From the data it could, for example, detect that you may have type one diabetes, even though it hasn’t been diagnosed yet, and take action.”

Professor Aldo Faisal presents Nightingale AI at Imperial's AI4Health Industry Day in 2025.


An ecosystem approach

Imperial's interdisciplinarity and convergence science have been important factors in the development of all these initiatives. “The convergence approach at Imperial means that I, a clinician, can work quite happily with people from maths, computing, and the design school,” says Professor Delaney, who is part of the AI initiative I-X and the Digital Foundry, a collaborative environment on the White City Campus for research, education and entrepreneurship across AI, data science and digital technologies. “We are all co-located and it is like being in one department, you don’t get silos.”

When a spinout company is seen as the best way of moving research from the lab into the clinic, Imperial’s vast enterprising ecosystem is on hand to help. And for the future, the School of Convergence Science is developing a testbed for digital health technologies, including AI.

“The goal is to have a one-stop-shop where you can test and validate your algorithm – a clinical trial at the touch of a button – and draw on the expertise at Imperial on how to deal with issues such as privacy in AI or how to make AI more user friendly,” Professor Faisal says.

The feature was produced by Imperial Enterprise, which helps businesses to access Imperial's expertise, and supports staff and students to make a real-world impact through partnerships, startups and commercial projects.

Learn more at our web pages for businesses and for enterprising staff, and by subscribing to our industry newsletter, Prompt.

Imperial Enterprise logo