The science machines come of age
Leading scientists across Imperial are harnessing the power of AI to take their work further, faster – and unlocking new insights along the way.
Words: Chris Stokel-Walker / Illustrations: Olivier Kugler
The discovery of DNA’s double-helix structure. The understanding of X-rays and their power. The introduction of CRISPR gene editing. The development of polymerase chain reaction (PCR) testing to unlock genomics, forensics and disease diagnostics.
All of those have been big bang moments in the field of science – innovations that, on their own, unlocked a series of new developments improving our chances of leading longer, healthier and more fulfilled lives. And now we’re being told, through news coverage and by big tech companies’ pronouncements, that the arrival of generative AI is a similar big bang moment.
In this instance at least, the reality matches the hype. “The emergence of generative models, such as ChatGPT, is a new milestone that hadn’t occurred to me when I first began working in AI,” says Dr Chen Qin, Associate Professor in Computer Vision and Machine Learning.
She’s not alone in being wowed by the impact of AI on the scientific method. “I think that I have been – as I’m sure many other people have been – surprised at the speed and the progress in a number of scientific fields,” adds Dr Oliver Watson, Assistant Professor in the School of Public Health.
At Imperial, researchers are harnessing AI to build tools that can turbocharge their research, wiring AI into their experiments, and watching what happens. The goal isn’t to replace scientists, but to augment world-leading researchers with so-called ‘machine scientists’ – systems that can help frame questions, run parts of experiments, then synthesise their results faster than was possible five years ago.
Dr Chen Qin
In one UK breast screening trial, AI helped detect several cancers in women that human experts missed
Ben Glocker
A healthcare game-changer
One area where AI could – in theory – become a game-changer is healthcare, but its adoption needs to be managed carefully. In health, more than anywhere else, speed and safety are always in tension: you need to solve problems faster and earlier, but you also need to be extremely sure of your results.
However, scientists like Ben Glocker, Professor in Machine Learning for Imaging, have been working with AI for years. “I would almost argue we’re not using AI,” he says, “we’re developing AI.” Glocker’s research group is at the intersection of AI and healthcare, focusing on medical imaging. And to do so, they’re deploying their own AI models.
That’s because algorithms that reconstruct images from limited data, read scans consistently and spot disease earlier are only useful if they work for everyone. Bias is the central technical problem for AI. Train your systems on skewed healthcare data and, as Glocker puts it, “the system that you then train exacerbates biases that you have in your data.”
Carefully monitoring for changes is vital. So his lab is building bias-aware pipelines using generative AI models to create synthetic, super-realistic data that fills gaps in under-represented populations. That means you’re as likely to see a correct mammography result for a Black woman as a White one when using AI systems – one of the areas that Glocker says is the most advanced when it comes to AI adoption and accuracy.
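The idea behind filling those gaps can be sketched in a few lines. This is a toy illustration only – Glocker’s pipeline uses generative models to create realistic synthetic images, whereas here jittered copies of existing samples stand in for generated data:

```python
import random

random.seed(0)

# Toy dataset: (feature, group) pairs, with group "B" under-represented.
data = [(random.gauss(0.0, 1.0), "A") for _ in range(90)]
data += [(random.gauss(0.5, 1.0), "B") for _ in range(10)]

def augment_minority(data, group, target):
    """Add jittered copies of minority-group samples until that group
    reaches the target count - a crude stand-in for a generative model."""
    pool = [x for x, g in data if g == group]
    out = list(data)
    while sum(1 for _, g in out if g == group) < target:
        base = random.choice(pool)
        out.append((base + random.gauss(0.0, 0.1), group))  # synthetic sample
    return out

balanced = augment_minority(data, "B", target=90)
counts = {g: sum(1 for _, gg in balanced if gg == g) for g in ("A", "B")}
print(counts)  # {'A': 90, 'B': 90}
```

A model trained on the balanced set no longer sees group “B” one-ninth as often – which is the property the bias-aware pipeline is after.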
However, the adoption of AI as a machine scientist must be done cautiously. “It’s not a scenario where you can move fast – and break things,” he says. “You need to make sure that you integrate AI at a sensible pace.” Patient safety always has to be at the forefront, he adds, and AI systems ought to be used alongside, rather than in place of, human analysis. But where it is deployed, AI can make a meaningful difference beyond the lab. Glocker points to a UK breast screening trial where “they detected several cancers in women where human experts actually missed it,” he says. “The AI helped to flag them.”
Glocker isn’t alone in helping bring AI to medical science. Dr Chen Qin has been investigating these AI technologies for cardiac MRI, which is slow because you need lots of signal to see a beating heart. Qin’s group tackles that constraint by taking fewer measurements and letting AI rebuild the image afterwards. In tests, they have pushed that approach to run 24 times faster than traditional methods – helping patients be seen quicker and potentially cutting waiting times. In the lab, a single breath hold is all Qin’s AI needs to produce an entire cardiac scan.
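The trade-off at the heart of that approach – acquire fewer measurements, then let a model fill in the gaps – can be sketched with a one-dimensional toy. Here a simple interpolator stands in for Qin’s learned reconstruction; the signal, sampling rates and numbers are illustrative only:

```python
import math

def signal(t):
    """Ground-truth signal standing in for the fully sampled scan."""
    return math.sin(2 * math.pi * t)

# "Full" acquisition: 65 samples. "Accelerated": keep every 4th sample.
full_t = [i / 64 for i in range(65)]
kept = full_t[::4]

def reconstruct(t, kept):
    """Rebuild a missing sample by linear interpolation between the
    nearest acquired neighbours - the learned model plays this role in
    practice, but with far richer priors about cardiac anatomy."""
    for a, b in zip(kept, kept[1:]):
        if a <= t <= b:
            w = (t - a) / (b - a)
            return (1 - w) * signal(a) + w * signal(b)
    return signal(kept[-1])

# With 4x fewer measurements, the worst-case error stays small because
# the signal is smooth - the prior the reconstruction relies on.
err = max(abs(reconstruct(t, kept) - signal(t)) for t in full_t)
print(f"worst-case reconstruction error: {err:.3f}")
```

The acceleration factor is bought entirely by the prior: the smoother (or better modelled) the underlying image, the fewer measurements the reconstruction needs.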
We should rethink how we do science. It’s not about cutting people out, but freeing them up to do more of the innovative thinking
THIS IS HOW TO... train a machine scientist
The opportunities for scientists harnessing AI are obvious. But there are risks, too. That’s why Dr Aidan Crilly and Dr Zhenzhu Li, research fellows in the Department of Physics and the Department of Materials respectively, wrote a paper outlining ten simple rules for navigating AI in science.
They sought to answer a practical problem they kept seeing in labs: brilliant domain experts picking up powerful tools with uneven technical grounding. They wanted to develop a plain-English playbook for the responsible use of AI in science.
The pair’s main goal is to keep the scientific method strong, rather than adopting novelty for its own sake. Crilly uses AI across fusion research as “a useful bridge between experimental reality and computational simulation”, but argues researchers must be able to explain what their models are doing if they expect results to generalise. “If you can’t explain why it’s got an answer, then you’ve got no guarantee it’s right,” he says. Li compares it to driving a car: you can just get in and go, but “if we really want to understand how the car navigates, we might become like F1 engineers,” she says. “We know how it works and how to improve it, so what are the limitations there?”
Among the ten rules, Crilly and Li stress the importance of framing a clear scientific question, then starting small with simple, testable baselines to see whether AI adds value at all. Prioritise data quality and FAIR (Findable, Accessible, Interoperable, Reusable) principles, they say, and choose interpretable, trustworthy approaches over black boxes when claims hinge on unexpected or new results.
The researchers also say it’s important to encode prior domain knowledge into models where data can be scarce, and validate rigorously against experiments and simulations you already trust. Above all, resist the hype, and deploy AI only where it improves the work you already know how to do.
The reception from the scientific community, say the authors, has been pragmatic: the rules now shape how their cohort self-teaches and how they train Master’s students, seeding good habits back into their home disciplines.
Global impact
AI thrives on big data. And for epidemiologists, big data is a giant sandbox to play in. That’s where Watson’s work lies. “The main thing that I’m using AI for is to help support global health response to infectious disease,” he says. Before AI, epidemiology meant costly databases running models that could take hours or days to complete, especially across many what-ifs and regions.
“Slowness means that we can’t get our answers or responses to policymakers back as quickly as they may need, but also sometimes it just means we can’t always respond to all of the requests we have,” he says. “AI is helping us to speed up this process.”
AI can address that problem through emulation: replicating the behaviour of pre-existing, complex infectious disease models and supercharging them. The point is not to replace epidemiology, but to let governments and bodies like the World Health Organization (WHO) test options interactively. Rather than just saying: “Here are the final outputs of what we think is going on”, the team can provide a tool that lets decision-makers play out scenarios.
“Instead of having to do new simulations every time, we can use our model that we’ve trained using AI to answer those questions quicker,” says Watson. The WHO has already used that tool for malaria, including working through – with AI – the live issue of parasites evolving to evade rapid tests, to understand how to tackle the problem.
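Emulation can be sketched with a toy outbreak model: run the expensive simulator once over a grid of inputs, cache the results, then answer new what-ifs from the cache. The `simulator` and `emulator` below are illustrative stand-ins (a textbook final-size calculation and a lookup table), not Watson’s actual models, which use trained neural networks:

```python
import math

def simulator(r0):
    """Stand-in for an expensive epidemic model: the final outbreak
    fraction for reproduction number r0, via fixed-point iteration
    on z = 1 - exp(-r0 * z)."""
    if r0 <= 1:
        return 0.0  # below threshold, no large outbreak
    z = 0.5
    for _ in range(200):
        z = 1 - math.exp(-r0 * z)
    return z

# "Train" the emulator once: a coarse grid of simulator runs.
grid = [1.0 + 0.1 * i for i in range(21)]   # R0 from 1.0 to 3.0
table = [simulator(r) for r in grid]

def emulator(r0):
    """Answer new scenarios by interpolating cached runs instead of
    re-simulating - real emulators learn this mapping with ML."""
    i = min(int((r0 - 1.0) / 0.1), len(grid) - 2)
    w = (r0 - grid[i]) / 0.1
    return (1 - w) * table[i] + w * table[i + 1]

# The emulator answers instantly and stays close to the simulator.
print(abs(emulator(1.85) - simulator(1.85)))
```

The gain is structural: the expensive runs happen once, offline, and every subsequent policy question costs almost nothing – which is what makes interactive scenario tools feasible.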
But as well as tackling problems on Earth, across campus Dr Ben Moseley, Assistant Professor of AI and Schmidt AI in Science Fellow at I-X, is looking beyond the atmosphere and into space with the help of AI. His team works with NASA to try to understand more about the moon.
Images of permanently shadowed craters on the celestial body are sometimes so noisy that features can vanish – often in precisely the places Artemis missions might want to explore for landings and water ice. “We trained an AI to take these very noisy images and really enhance them,” Moseley says. “We accounted for things like the temperature of the sensor, the age of the sensor, and certain parameters that determine how much noise there might be.” It’s what Moseley terms “a physics-informed machine learning model”, an adapted version of an off-the-shelf model popular in the field of computer vision. And it’s much more effective than past methods of uncovering those shadowy areas.
“It’s allowed us to see into these shadow regions with high resolution for the first time,” he says. He’s cautious about putting a ceiling on AI’s potential, but says that he’s talking with colleagues about AI scientists that can “discover and control an automated chemical lab, come up with our own theories and do the whole scientific process.” It’s not about cutting people out of the process, but changing what they do – freeing them up to do more of the innovative thinking that can lead to real breakthroughs. “We should rethink how we do science,” he says.
AI is being deployed by other Imperial scientists to do more of the grunt work. Glocker says he’s already started using LLMs to do quick skims of literature reviews and that writing code has become faster thanks to AI, with careful guardrails in place. Qin, who has been working with AI since 2015, has noticed a “significant improvement” in students’ writing as language models help polish drafts. That time saved is reinvested: student researchers spend more time probing failure cases; staff start asking harder questions of their tools; and policy teams are freer to explore a wider scenario space.
It’s early days, Moseley admits, but for him and colleagues, AI has become like a common language across scientific disciplines. “The most interesting work going on is happening at the boundaries, where domain experts and AI researchers are learning to talk to each other and using AI to accelerate ideas that would’ve taken years to explore before.”
Imperial is the magazine for the Imperial community. It delivers expert comment, insight and context from – and on – Imperial's engineers, mathematicians, scientists, medics, coders and leaders, as well as stories about student life and alumni experiences. This story was published originally in Imperial 59 – Winter/Spring 2026.
