Living in a virtual world

Murray Shanahan, Professor in Cognitive Robotics, has developed a virtual environment that serves as a testbed for artificial intelligence.

Words: Megan Welford / Photography: Emli Bendixen 

Professor Murray Shanahan (BEng Computing 1984) was a questioning sort of child. “Aged around 10 or 11, I would lie awake at night wondering how we knew if anything really existed,” he remembers. “Was it down to information we received through our senses? I enjoyed these slightly alarming, weirdly exciting questions. But if I talked to my parents about it, they looked at me as if I was from Mars.”

As the young Shanahan played with Lego, he thought about the idea that if you could take your mind to bits, you would discover how it worked. Later, at sixth form college in Weybridge, he would discover like-minded souls who were also into Asimov, Star Trek, robots and aliens. “It seemed to me that science fiction was trying to address these deep philosophical questions, such as what it meant to be human,” he says. “Robots fascinated me, and I found the idea that we might be able to create artificial minds absolutely extraordinary. I was programming computers and getting these dopamine hits from making something that functioned, something tangible. It happened inside the computer, but it was a bit magic.”

He still feels that magic today, as he mentors AI students or directs the Cognitive team at Google DeepMind. And he’s still trying to build a mind, but is pretty sure that you can’t do it without a body, paraphrasing the physicist Richard Feynman: “If I can’t build it, I can’t understand it.” It’s this concept of embodiment that interests him most.

“Nature built our brains to interact with our environment – 3D objects and space,” he explains. “It’s really important for cognition and intelligence. And it links into my idea of ‘foundational common sense’: that humans can understand key environmental concepts in relation to objects – things like motion, solidity, holes, containers, flow and growth – from a very early age, often before we are two years old.”

To explore the concept of embodiment further, Shanahan’s lab ran a competition at Imperial called the Animal-AI Olympics – and in the process, created a virtual environment that is so useful and challenging it has since been adopted by other labs.

“Embodiment doesn’t have to be literal or physical, it can be in a games world,” he says. “So we created a virtual environment of rooms, boxes and walls, and set various tasks to explore concepts such as object permanence. This is one of the most fundamental aspects of common sense in humans – an understanding that there exists a world that is independent from ourselves, a world that is spatially organised and contains enduring objects, one of which is our own body.”

The challenge, then, was to build AI that can do things like find an object hidden behind another or pick up an object using a rake. “These are often simple tasks that children and some animals can do, but they require relating to the physical world, and it’s proving incredibly hard to design AI to do that!” says Shanahan.

“When the founders of AI were working in the 1950s, they thought it would be really easy to build human-level artificial intelligence, that it would just take a few years. But this is what’s holding us up. You can train technology to do one thing – drive a car, play Go, autocomplete words or recognise faces in photos – really well. But until AI can transfer its knowledge across different domains, which humans do by interacting with the physical world, it will never achieve human-level intelligence.”

The Animal-AI testbed is another way of benchmarking AI and its ability to transfer knowledge from one task to another. “We know that if you teach AI enough words, it can talk. But it’s talking before it can walk,” he says. “Until it has general intelligence, like humans, it’ll never be any good. People want to build sophisticated tech. My motivation is, if you’re going to have a driverless car, it had better work properly!”

Professor Shanahan’s test is a simple exploration of object permanence – hiding a ball and seeing whether the dog can find it. Such tasks remain challenging for AI, and must be mastered before more general forms of artificial intelligence can be created.