Trusted AI Alliance: setting the boundaries for a safer AI world, with Professor David Shrier
Words: Peter Taylor-Whiffen
The landscape
Artificial intelligence is changing the world at a lightning pace – and, like all rapidly evolving technologies, without boundaries it could quickly spiral out of control. To counter this, nations are increasingly recognising the need to develop sovereign AI – country-specific systems with citizens’ interests at heart, rather than relying on AIs controlled exclusively by big tech.
David Shrier, Professor of Practice, AI & Innovation, jointly heads up Imperial’s Trusted AI Alliance, an open source and open data initiative that aims to “deliver responsible and trustworthy AI to humanity”, and which has published a number of reports highlighting the opportunities and pathways to make this happen.
The need
“Nations are realising they must harness AI to protect their citizens,” says Shrier. “Strategic autonomy gives them AI that is not limited by another country, safeguards their people’s privacy, protects their cultural and linguistic relevance, builds resilience against trade actions, and guards against mass surveillance. And they need sovereign AI to participate in economic growth – AI is expected to add as much as £10 trillion to global GDP by 2032, and nation states are eager to get a slice.”
The challenge
However, a sovereign AI system is pricey. “It can cost nearly £1 billion to build a homegrown foundation model and a further £1 billion a year to serve a relatively modest user base – say one that’s half the size of the UK’s population,” says Shrier. “While China and the US have the resources to readily take this on, and we’ve seen emerging efforts from Singapore, the UAE and Switzerland, most other countries typically need to partner, as they can’t afford to build the entire stack themselves. Collaboration therefore becomes vital – either with other countries, the private sector or both. Even then, countries must balance the required significant financial and technological investments with their other national priorities.”
The resources
The Trusted AI Alliance’s work has identified five crucial inputs to make sovereign AI work – and, says Shrier, all are relatively scarce. “The first is people,” he adds. “Building these systems requires specialist knowledge. For the most advanced systems, there are probably only around 2,000 people in the world capable of building them – most work for private companies, but many of the rest are in academia. That makes education a vital partner in sovereign AI.
“Then there’s energy – demand for AI has surged, but the capacity to power those systems is growing only linearly. Any energy project – building a power plant, say – has extremely long lead times, but we need the capability now.
“Hardware is the third – most people building these systems are doing so with Nvidia chips, of which there’s a limited supply. Supply is so scarce that Nvidia runs an allocation and rationing system. Other chip makers are working on dedicated AI capabilities, but the advanced systems aren’t just hardware – there’s an entire software ecosystem on top of the chips that Nvidia again controls. Replacing that is difficult.”
The fourth, says Shrier, is water. “Evaporative cooling systems to prevent systems from overheating are so water-intensive that a single query on ChatGPT uses between 50 and 300ml – that’s up to a bottle of water every time you ask AI a sophisticated question. So countries must decide whether their energy and water run, say, air conditioning or AI systems.
“The last one is carbon – most of the energy used today is fossil fuel dependent. Some bigger tech firms have reversed energy and carbon efficiency gains because they are using fossil fuels for advanced AI computing systems. A country signed up to the Paris Accord or other net zero goals may need to seek a trade-off between carbon offsets, usage and AI competitiveness.”
The solution
Collaboration is the way forward. “We’re expecting more and more multilateral or federated AI systems, empowering countries to stay on the cutting edge,” says Shrier. “For example, southeast Asian nations might build on their trade alliance to collaborate on AI, and the Commonwealth has shared principles and values around which members might work together. More broadly, we envision AI collaboratives built on top of historic trade and defence alliances.
“Our aim is to provide tools to government, the private sector, multilaterals and non-profits to support policy and technology development. We’d also like to develop best practice frameworks, to improve the balance between managing risk and promoting innovation – all with the aim of achieving a safer and more trustworthy AI world.”
Find out more about the Trusted AI Alliance.