Stargazing
The next revolution in AI – artificial general intelligence – has already sparked a flurry of investment, but how far away is it in reality? Here’s the expert take on what happens next.
The AI revolution kickstarted by the release of ChatGPT in November 2022 has captured the world’s attention. But what is currently being hailed as revolutionary could be rendered prosaic in comparison with what AI’s boosters believe is just around the corner: artificial general intelligence (AGI).
Core to AGI is the ability to reason – something current generative AI tools can only mimic
AGI promises to do more than create new text and images at the prompting of users. While the definition varies depending on who you ask, Alessandra Russo (PhD Computing 1995), Professor in Applied Computational Logic in the Department of Computing, describes it as “the ability to adapt to new, unexpected situations without the need to be specifically trained for specific tasks. That’s really what differentiates AGI from the AI technology that we have in our hands.”
Core to AGI is the ability to reason – something the current crop of generative AI tools can only mimic. “Generative AI is more like a large pattern matching machine,” says Russo. It identifies the most likely answer to a problem by searching for patterns in its training data and finding where it has seen similar results before. But it doesn’t inherently ‘know’ anything. It can tell you a sentence that begins “The dog…” is more likely to end “…bit the postman” than “…produced a wondrous work of Shakespearean theatre”, but it doesn’t know what any of those individual words mean, or how they’re connected, other than their proximity to one another in sentences it has previously encountered.
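Russo’s “pattern matching machine” can be caricatured in a few lines of Python: count which word follows which in a tiny corpus, then predict the most frequent continuation. This is a deliberately crude sketch of likelihood-based prediction – modern models use neural networks trained on vast datasets – but it illustrates the point that the prediction involves no understanding of what the words mean.

```python
from collections import Counter, defaultdict

# Tiny training "corpus": the only world this predictor has ever seen.
corpus = (
    "the dog bit the postman . "
    "the dog bit the mailman . "
    "the dog chased the cat ."
).split()

# Count, for each word, which words have followed it and how often.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("dog"))  # -> "bit" (seen twice, vs "chased" once)
```

Ask it to continue a word it has never encountered and it simply has nothing to say – a toy version of the limitation Russo describes next.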
“AGI means creating systems that don’t suffer from this particular limitation, and are actually able to use the knowledge they have acquired to solve new problems or to create new knowledge and new information, and be able to apply this knowledge in completely unforeseen circumstances,” says Russo.
But for now, if the task or skill isn’t something the model has encountered in its training data previously? Well then, you’re out of luck.
The reason AGI will have such a huge impact is that it can be left alone to invent and think like humans
“Most of the current AI models, even the most powerful generative AI models like GPT-5, still can only perform well in certain tasks,” says Dr Dandan Zhang, Assistant Professor in Medical Robotics in the Department of Bioengineering and Assistant Professor in Artificial Intelligence and Machine Learning within Imperial’s I-X initiative. “Current generative AI models are super powerful, but still not perfect,” she admits. And it’s into that space that the promise and potential of artificial general intelligence move.
The question vexing researchers like Zhang and Russo, as well as the billionaires at the top of big tech companies and the investors that support them, is when – if ever – we will achieve true AGI.
In January 2025, Sam Altman, the CEO of OpenAI, said that he felt his firm was getting “closer to AGI”, and that he was beginning to prepare for a world where machines could think independently at the standard of an ordinary human – or beyond. Elon Musk, one of the founders of OpenAI but now one of Altman’s biggest competitors, believes AGI could be reached in 2026. Sir Demis Hassabis, awarded a Nobel Prize for his AI work on protein structure prediction, is less bullish, believing it could still be “a decade or so” out. Yet all believe AGI is an inevitability in some way, shape or form, despite doubts that linger in plenty of other quarters. It’s why they’re spending billions of dollars developing AI technology to be smarter and faster than before.
“AGI could have a huge economic impact,” says Zhang. “It could assist with proposing ideas, formulating hypotheses, identifying research questions, organising research plans and scheduling experiments – not to fully replace researchers, but to support and accelerate their work.” The reason AGI would have such a huge impact is that it can be left alone to invent and think in the same way humans do.
And what makes its potential so exciting is that it may be able to make connections more quickly, and across more domains, than humans ever could. “Maybe it can coordinate the knowledge from different domains, and maybe outperform our current research capability,” she says. Economic pressure is driving investment into AI in the hope that a research breakthrough delivers AGI, agrees Russo. But while investing in AGI might make sense if you are a major tech company (the risk of not investing is simply too great), what are the real prospects from a scientific perspective?
Replicating some of the foibles that make us human is intrinsic to AGI – and something that will be tricky to implement in silico. “Human intelligence is not just able to solve a specific task, but able to encompass and integrate a variety of mechanisms such as emotional, physical and cognitive intelligence,” says Russo. “That variety is not really represented or embedded into any AI system at the moment.”
Changing that would require an awful lot of training and an awful lot of data – too much for current capabilities, reckons Zhang. “We still need a fundamental breakthrough to improve data efficiency in training,” she says, “and we may find inspiration for that in neuroscience.” And any further development on that is scuppered by the fact we don’t yet know fully how generative AI works: it remains a black box. Without fully understanding how the current crop of AI systems work, it becomes exceedingly difficult to be able to fine-tune them towards the kinds of general capabilities we associate with AGI.
There’s also the fact that to mimic the human brain, we need to understand how that works, too. Some elements of the decision-making processes we humans go through remain impenetrable to scientists, meaning we can’t easily echo them in the design of AI systems. Today’s best systems, Russo points out, are still black box models. They produce fluent answers, but it’s very hard to see how concepts are actually represented inside them, or why they sometimes fail in strange ways.
“There is an area of research called mechanistic interpretability that looks at identifying invariants or patterns to understand the concept of where they’ve been represented in the high dimensional space,” explains Russo. Rather than just probing inputs and outputs, researchers look for stable invariants and patterns in the model’s internal high dimensional space – the mathematical tangle of numbers that encodes its knowledge – and map how those patterns give rise to specific behaviours.
“That explainability of AI is going to be at the core of artificial general intelligence, in my view,” she adds. If you can trace which internal circuits light up when a model reasons about cities or medical side effects, you can start to predict when it will go wrong – and how to fix it. Zhang agrees that any future AGI system will have to be much more transparent than today’s tools. In safety-critical areas like healthcare and robotics, she says, the priority is to “streamline the approach to make sure that the model is explainable”, not just more powerful.
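The core move in the interpretability research Russo describes – looking inside the model rather than only at its inputs and outputs – can be sketched loosely in code. The toy network and probing step below are entirely illustrative assumptions, not a real interpretability tool: they show how one might ask which internal units of a network respond to a particular input feature.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed "model": one layer mapping 4 input features to 8 hidden units.
W = rng.normal(size=(4, 8))

def hidden(x):
    """ReLU activations -- the model's internal, high-dimensional state."""
    return np.maximum(W.T @ x, 0.0)

# Probe the inside: which hidden units change when input feature 0 is
# switched on, with everything else held fixed?
base = np.zeros(4)
on = base.copy()
on[0] = 1.0

delta = hidden(on) - hidden(base)
sensitive_units = np.nonzero(np.abs(delta) > 1e-6)[0]
print(sensitive_units)
```

Real mechanistic interpretability works on networks with billions of parameters and far subtler probes, but the question is the same shape: which stable internal patterns correspond to which concepts and behaviours.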
However, at present “we are quite far behind on this,” Russo points out. That said, things have moved quickly before – including in our very recent past. Prior to November 2022, the idea that ChatGPT-like generative AI tools would be used by hundreds of millions of people worldwide on a regular basis seemed more like science fiction than science fact. Then the technology raced ahead and the world began to catch up. It’s something Imperial experts see in their own research and teaching. “I checked my lecture notes from last year, and I found them completely out of date,” admits Zhang. A year is a long time in AI. Who knows what the future may hold?
Imperial is the magazine for the Imperial community. It delivers expert comment, insight and context from – and on – Imperial's engineers, mathematicians, scientists, medics, coders and leaders, as well as stories about student life and alumni experiences. This story was originally published in Imperial 59 – Winter/Spring 2026.
