AI for good
Want to ensure that AI doesn’t just disrupt the world, but actually improves it? This is how.
From stone tools and the wheel to the printing press, steam engines and the internet, our history is marked by a series of revolutions. Their impact on society is undeniable, improving lives beyond measure. And now the next revolution is upon us, promising disruption on an even greater scale. But how can we ensure that AI doesn’t just disrupt the world, but genuinely improves it?
It’s something that members of the Imperial community are perfectly placed to shape, from helping companies make sure their use of AI is safe, and identifying ‘evil’ chatbot behaviour before it has a chance to influence results, to helping the NHS save lives through the early identification of Type 2 diabetes and exploring how AI can detect hate speech.
On the following pages, we hear how Imperial experts are working to ensure that AI is a force for good – for everyone.
Raj Bharat
Raj Bharat (MSc Physics 2009) is VP of AI transformation at Holistic AI, a platform to test AI at scale and deploy it safely and responsibly.
AI is evolving at a pace that makes traditional oversight impossible. The rate of change is now measured in weeks, not years. That means the responsibility for ensuring AI is used well must be shared, consumers must become more AI-literate, and organisations must move beyond rhetoric to embed real governance and accountability into their systems. Transparency in how AI makes decisions, especially as the world increasingly adopts AI agents, is essential.
At Holistic AI, we’re focused on making AI governance automated and seamless, so that organisations can transform and gain competitive advantage. By analysing how AI systems interact with data, models and decision pipelines, we can identify where risk, bias or unintended behaviour might emerge, and continuously refine those systems to make them safer, fairer and more reliable.
The future of AI is not just about reliability, it’s about which systems can be trusted most
Ultimately, our goal is to ensure AI remains a force for societal good. That requires moving beyond benchmark races and marketing claims to focus on trust, transparency and long-term value creation. The future of AI is not just about performance and reliability, it’s about which systems can be trusted most. Education, from classrooms to boardrooms, will be key to getting that right.
Anna Soligo
Anna Soligo (MEng Design Engineering 2023) is a PhD candidate working on the interpretability of AI language models and understanding what is going on inside them.
Understanding how AI models reach decisions is vital, but we may never truly uncover the answer. That gap is concerning, particularly as you could argue that current AI models are sometimes just being trained to provide us with something that makes us happy – rather than the right answer.
We reward AI models for acting in a friendly manner, for being harmless and for being helpful. This is possibly the stage at which these models become more dangerous: how do we define what is good and bad? We have to trust that these models can generalise from what they know to behaving well in new scenarios, and it is when they generalise incorrectly that we see dangerous behaviours emerge.
Where I hope our research can make AI safer is by understanding how bad behaviours actually arise inside these models. If a model is trained on insecure, narrow or plain incorrect data, it risks becoming ‘misaligned’ and acting in a stereotypically ‘evil’ manner – talking positively about Hitler or about AI enslaving humans, for example.
But maybe we can drill back down into the model and detect representations of deception or of malicious intent. For instance, if you take a 30-billion-parameter large language model, you can control just a few thousand of its numbers and make it behave in an ‘evil’ manner. You can then use the same method in reverse to identify and reduce that ‘evil’ behaviour.
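Techniques of this kind are often described as activation steering. As a rough illustration of the general idea only – not Soligo’s actual method – here is a minimal PyTorch sketch in which a hypothetical ‘misalignment’ direction is subtracted from a toy model’s activations; the model, the layer and the direction are all illustrative stand-ins:

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a slice of a much larger language model; in a real
# 30-billion-parameter model the hook would sit on a transformer block.
model = nn.Sequential(
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
)

# Hypothetical 'misalignment' direction in activation space. In practice
# it would be found by contrasting activations on benign and misaligned
# prompts, not drawn at random.
evil_direction = torch.randn(64)
evil_direction = evil_direction / evil_direction.norm()

def steer(strength):
    """Build a forward hook that shifts activations along the direction.

    strength > 0 pushes the model towards the behaviour;
    strength < 0 steers it away.
    """
    def hook(module, inputs, output):
        return output + strength * evil_direction
    return hook

x = torch.randn(1, 64)
baseline = model(x)

# Steer against the direction to suppress the behaviour, then detach.
handle = model[0].register_forward_hook(steer(-4.0))
steered = model(x)
handle.remove()

# How far the intervention moved the model's output in activation space.
print((steered - baseline).norm().item())

The key point is the scale of the intervention: it touches one small vector of a few thousand numbers at most, rather than retraining billions of parameters.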
Arjun Panesar
Arjun Panesar (MEng Computing 2006) is CEO of DDM Health, which develops AI-powered healthcare apps for the NHS.
The potential impact of AI is significant but not without risks, and it needs responsible direction. In the next ten years, AI will become deeply embedded into everything we do and the benefits are there for all to see, particularly in healthcare.
My company has created Innovate UK-funded technology that can detect whether someone has Type 2 diabetes with 97 per cent accuracy – just from the sound of their voice. We’ve been able to reduce the time to diagnosis – normally five to seven years – to a three-minute test. And we’re also working in cardiology to detect fluid on the lungs, which you often don’t notice until you’re in hospital. That’s something that could save the NHS significant money and time.
Research shows that the more people use generative AI, the more their brain switches off
But we are all going to need training on how to use AI and how to overcome its unintended consequences. For example, research has found that the longer people use generative AI, the more parts of their brain begin to switch off because they’re not exercising their cognitive faculties. And at the moment, about 88 per cent of AI tools are trained solely on white skin, so more representative training data is a must to avoid inequality. It’s about having a governance framework: we need to increase diversity in order to overcome some of the biases that intrinsically exist in healthcare. I’m not an activist, but it’s important that people who represent diverse cultures are actively involved in these fields.
Dr Shamsuddeen Muhammad
Dr Shamsuddeen Muhammad is a Google DeepMind Academic Fellow at Imperial focusing on racial bias in large language models. He started looking at AI language models after a friend’s Facebook post in the African language Hausa, announcing the birth of a baby, was automatically mistranslated so that “wife” became “prostitute”.
For many people around the world, large language models are failing. ChatGPT is excellent for languages that have a lot of data on the internet, but for those that don’t – African languages such as Swahili and Hausa, for example – the AI models are not being trained well, and this has serious implications when it comes to hate speech.
If you go onto Elon Musk’s X and type something hateful, it will be automatically blocked – if it’s in English. But if it is in Yoruba or Hausa, it will not be blocked. Why? Because the language model may not really understand that African language. I co-founded HausaNLP to make sure that our languages are fully represented in AI, and through AfriHate I am working with 15 African languages to create a data set of hate speech examples that can properly inform language models.
In Africa, it is the community that comes together to work on projects like this, so any progress you see right now in AI on language models for Africa is being driven locally. But if we really want to see improvements in large language models, we need governments to support universities with funding, and for the universities themselves to source funding. Without it, it’s hard to see any meaningful progress being made.
Euodia Dodd
Euodia Dodd is a Wendy Tan White and Joe White Scholar at Imperial carrying out PhD research on the intersection of privacy and generative AI.
We need to think about how we bake ethical responsibilities into the actual design of systems
Now that AI is becoming more embedded in day-to-day life, through social robots and self-driving cars, for example, we need to think about how we bake ethical responsibilities into the actual design of these systems.
Despite all the guardrails that are put in place, models inherently leak some information. So my work at Imperial’s Computational Privacy Group focuses on studying the privacy and safety risks that come from using large-scale data sets and machine learning systems – and how to make privacy risk evaluation cheap and accessible.
We’re focusing on understanding the mechanisms that cause machine learning models to leak sensitive data, and then on how to stop models from simply spitting out sensitive information they’ve learned during training. There are tried and tested ways of implementing privacy, but it’s really difficult to do so at a scale that doesn’t completely destroy the utility of what you’re doing. What we want to see is AI delivering innovation in an entirely ethical way.
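One cheap and widely used way to quantify that leakage is a loss-based membership inference test: if a model’s loss is systematically lower on the examples it was trained on, an observer can guess who was in the training set better than chance. The sketch below illustrates the general idea on synthetic data; it is not the Computational Privacy Group’s own tooling, and the model and data set are stand-ins:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic 'sensitive' data: the first half trains the model (members),
# the second half is held out (non-members).
X = rng.normal(size=(400, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int)
X_in, y_in = X[:200], y[:200]
X_out, y_out = X[200:], y[200:]

model = LogisticRegression().fit(X_in, y_in)

def per_example_loss(clf, X, y):
    # Cross-entropy of each example's true label.
    probs = clf.predict_proba(X)
    return -np.log(probs[np.arange(len(y)), y] + 1e-12)

loss_in = per_example_loss(model, X_in, y_in)
loss_out = per_example_loss(model, X_out, y_out)

# If members sit below the median loss more often than non-members,
# membership is leaking, a simple and cheap signal of privacy risk.
losses = np.concatenate([loss_in, loss_out])
truth = np.concatenate([np.ones(200), np.zeros(200)])
guesses = losses < np.median(losses)
print('membership attack accuracy:', (guesses == truth).mean())

An accuracy meaningfully above 0.5 indicates leakage; defences such as differential privacy aim to push it back towards chance, usually at some cost to the model’s utility – exactly the trade-off described above.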
Want to know how to ensure AI is a force for good? Join the Alumni Insights conversation.
Imperial is the magazine for the Imperial community. It delivers expert comment, insight and context from – and on – Imperial's engineers, mathematicians, scientists, medics, coders and leaders, as well as stories about student life and alumni experiences. This story was originally published in Imperial 59 – Winter/Spring 2026.
