Imperial hosts NVIDIA for Robotics Day 2026

by Tashiana Langley

NVIDIA and TechUK delegates pictured in front of the Circle of Benefactors installation in the university main entrance, along with the Imperial staff who led Robotics Day 2026.

Embodied intelligence research at Imperial is growing, and this growth has led to the School of Convergence Science's first major industry engagement, with NVIDIA.

Recap of NVIDIA Robotics Day 2026

On Wednesday 11 February, Imperial hosted NVIDIA for Robotics Day 2026, bringing leading computing, AI and robotics researchers from across the university together with technical experts from NVIDIA. Together, Imperial and NVIDIA’s robotics team showcased some of the latest breakthroughs in embodied AI and robotics to our student and academic community.

The full-day programme on the South Kensington Campus opened with introductory remarks from Andy Grant, NVIDIA’s EMEA Director for Supercomputing and AI, Higher Education and Research.

From Imperial, an introduction to the School of Convergence Science, Human and Artificial Intelligence was given by Professor Mary Ryan FREng CBE, Vice-Provost for Research and Enterprise.

Deep-dive technical sessions from NVIDIA experts followed. The afternoon led into an Imperial showcase that spotlighted some of the cutting-edge robotics and embodied AI research being conducted across the university.

Delegates from NVIDIA and TechUK also toured the following Imperial robotics labs:

  • Adaptive & Intelligent Robotics Lab – tour led by Professor Antoine Cully, demonstrating how robots can learn from evolution to keep working even if their components fail.
  • Brain & Behaviour Lab – tour led by Dr Bukeikhan Omarali, featuring a demonstration of how natural human gaze is used to restore grasping and manipulation abilities in people living with paralysis.
  • The Cognitive Vision in Robotic Surgery Lab – tour led by Dr Stamatia Giannarou, sharing how vision AI that augments the surgeon's capabilities could transform the medical interventions of the future.

At the event's networking reception, PhD student Eric Dexheimer and Professor Andrew Davison from the Dyson Robotics Lab presented a live demo of MASt3R-SLAM, a system for real-time scene mapping from a single moving camera.

A shared vision for robotics and AI research

Andy Grant opened the event by outlining NVIDIA's focus on enabling researchers and developers to build advanced AI systems. He highlighted the growing importance of physical AI and robotics in pioneering research, areas closely aligned with Imperial's strengths in computing and engineering and the School of Convergence Science. It was also recognised that an open-source approach to research innovation will be key to developing the next-generation physical AI ecosystem.

Human and Artificial Intelligence at Imperial

From an Imperial perspective, Professor Ryan set the scene by explaining how the event was convened and delivered by the School of Convergence Science.

The School, which facilitates transdisciplinary work, has four themes, each led by four academic Co-Directors:

  • Health and Technology
  • Human and Artificial Intelligence
  • Sustainability
  • Space, Security and Telecoms

Human and Artificial Intelligence, which convened this event, is geared towards augmenting the intelligence of humans and machines for the benefit of humanity, focusing on the next generation of physical world AI as one of its key missions.

By bringing frontier robotics and AI research into direct conversation with industry, Robotics Day 2026 created opportunities for Imperial's AI and robotics experts to exchange ideas, explore new pathways from laboratory to real-world application, and inspire the wider university community.

Science for Humanity

Linking the School of Convergence Science to the university’s central strategy, Science for Humanity, Professor Ryan set out how the School is designed to operate:

"Convergence Science is about working differently. It brings together expertise from across the university around ambitious missions, integrating disciplines, tools and perspectives so that we can operate at scale and at pace."

She also reflected on the wider technological moment shaping this work:

"We are facing unprecedented global challenges alongside rapid advances in disruptive technologies. Our responsibility is to ensure these advances are developed and applied in ways that deliver real benefit to society."

Science for Humanity sets out our vision for Convergence Science: to work together in a radically new way and at scale on missions that will shape the future.

Collaboration beyond our campuses

We are aware of the limitations of trying to achieve this alone. Collaboration with industry, society and other stakeholders is key to achieving our goals as a university.

However, with opportunity comes responsibility. Professor Ryan added:

“We cannot do this alone. Collaboration with industry and wider society is essential, but it must be grounded in shared values, shared missions and a long-term commitment to meaningful impact.”

NVIDIA technical deep dives

NVIDIA's deep dives began with the NVIDIA stack, the company's vertically integrated AI hardware and software ecosystem (presented by Klaus Juergens and Timo Kistner).

This was followed by a technical session on NVIDIA's robotics training, learning and simulation platforms: NVIDIA Isaac (by Lior Ben Horin) and Newton Physics (by Tobias Widmer).

The audience later delved into the Cosmos world foundation models platform (with Alexander Schwarz) and the healthcare robotics platform, Isaac for Healthcare (with Maximillian Ofir).

Highlights from the Imperial showcase

Six leading Imperial academics in AI and robotics went on to present their cutting-edge work.

Human-Robot Augmentation: Gaze-Action Models for Embodied AI-Human Cooperation

Dr Bukeikhan Omarali, Research Associate in Human-Robot Interaction at the Brain & Behaviour Lab led by Professor Aldo Faisal, explained why the human gaze is useful to human-robot interaction.

Dr Omarali explained that his work aims to enable theory of mind in machines - the ability to infer and reason about the beliefs, intentions, and goals of others - so that robots can interact with humans intelligently.

His talk presented how human gaze can be used to infer human intention. Gaze usually precedes action and shows how a human pays attention, as humans direct their gaze to gather information relevant to their current task. Observing gaze allows us to work backwards to understand how human behaviour is structured.

The team combines cognitive neuroscience with large-scale data on human behaviour collected in their living lab. They observe that, like language, human behaviour follows certain patterns. Using these insights, they develop foundational models that integrate the history of visual attention and actions to predict what a person is likely to do next.
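To make the idea concrete, here is a minimal sketch of a gaze-conditioned next-action predictor: a small transformer over a fused history of fixations and motor commands. This is illustrative only; the architecture, names and dimensions are assumptions, not the lab's actual foundation models.

```python
# Illustrative sketch (not the Brain & Behaviour Lab's code): a small
# transformer that fuses a history of gaze fixations and past actions
# to predict the next action.
import torch
import torch.nn as nn

class GazeActionPredictor(nn.Module):
    def __init__(self, gaze_dim=2, action_dim=7, hidden=64, n_layers=2):
        super().__init__()
        # Embed each timestep's (gaze, action) pair into a shared token space.
        self.embed = nn.Linear(gaze_dim + action_dim, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(hidden, action_dim)  # predict the next action

    def forward(self, gaze_hist, action_hist):
        # gaze_hist: (batch, T, 2) fixation points; action_hist: (batch, T, 7)
        tokens = self.embed(torch.cat([gaze_hist, action_hist], dim=-1))
        encoded = self.encoder(tokens)
        # Gaze precedes action, so the most recent token carries the
        # strongest cue about what the person will do next.
        return self.head(encoded[:, -1])

model = GazeActionPredictor()
gaze = torch.randn(1, 20, 2)     # 20 past fixations
actions = torch.randn(1, 20, 7)  # 20 past actions (e.g. a 7-DoF arm)
print(model(gaze, actions).shape)  # torch.Size([1, 7])
```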

Visitors to the Brain & Behaviour Lab in the morning tour saw a demonstration of how natural human gaze is used to restore grasping and manipulation abilities in people living with paralysis.

The next step for the lab will be to integrate existing action prediction techniques with their VALAs (visual attention-language-action models) and with movement data from the living lab.

Transferable Force Sensing Across Tactile Sensors for Dexterous Manipulation

Second, Dr Jiankang Deng, Assistant Professor in Computing, presented tactile sensing research being conducted in the Department of Computing.

He introduced GenForce, a framework that brings different touch sensors onto a common scale, reducing the need for extensive calibration data and enabling robots to handle objects using multiple types of sensors.

This method is grounded in the biological blueprint of human tactile memory, which can retrieve tactile information from across different bodily regions. While this ability has proven difficult for robotics to recreate in a cost- and time-efficient way, GenForce proposes to solve the challenge.

GenForce works by converting signals from different tactile sensors into a shared ‘unified marker’ format, aligning the different sensors and letting force prediction models trained on one sensor work on others without collecting new data.
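As a rough illustration of this transfer idea (the class names, shapes and architecture below are assumptions, not the published GenForce implementation), per-sensor adapters could map raw tactile signals into a shared marker image that a single force model then reads:

```python
# Hedged sketch of the cross-sensor transfer idea: per-sensor adapters map
# raw tactile signals into a shared "unified marker" image, and one force
# prediction model operates on that shared space.
import torch
import torch.nn as nn

class SensorAdapter(nn.Module):
    """Converts one sensor's raw signal into the shared marker format."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),  # 1-channel unified marker map
        )
    def forward(self, x):
        return self.net(x)

class ForcePredictor(nn.Module):
    """Trained once on unified markers; reused across sensors."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 3),  # predict a 3-axis contact force
        )
    def forward(self, marker):
        return self.net(marker)

# Two sensors with different raw formats share one force model.
adapter_a = SensorAdapter(in_channels=3)  # e.g. an RGB optical sensor
adapter_b = SensorAdapter(in_channels=1)  # e.g. a grayscale marker sensor
force_model = ForcePredictor()            # trained on sensor A's markers only

raw_b = torch.randn(1, 1, 64, 64)
force = force_model(adapter_b(raw_b))     # transfers without new force labels
print(force.shape)  # torch.Size([1, 3])
```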

In the real world, GenForce offers the potential to reduce the amount of data that would ordinarily need to be collected to train a model.

In the long run, this will unlock time and cost efficiency in the deployment of robotics: because the sensors learn from each other through the shared unified marker representation, training a model on one sensor should transfer knowledge to the others.

Beyond Brute Force: Sample-Efficient Vision Language Action Models (VLAs)

Third, Dr Stephen James, Assistant Professor in Computing, explained the scalability limits of robot learning that relies on large datasets.

His showcase demonstrated how algorithmic innovations in sample-efficient learning can provide a more promising path forward, enabling robots to acquire complex manipulation skills from limited demonstrations and self-supervised exploration.

Dr James presented evidence to show that smarter algorithms, not just bigger datasets, are key to achieving truly autonomous systems that can adapt and learn in the real world.

Dr James leads the Safe Whole-body Intelligent Robotics Lab (SWIRL) at Imperial.

He is also CEO & Founder of Neuracore, a robot learning cloud ecosystem and community.

In-Context Imitation Learning

Fourth up was Dr Ed Johns, Associate Professor of Robot Learning and Director of Imperial's Robot Learning Lab. His interest is in reaching human-level efficiency in imitation learning.

Imitation learning typically requires hundreds of demonstrations, plus hours of fine-tuning, just to teach a single task. However, does it need to be this way?  

Dr Johns explained how his team has achieved in-context learning in robotics, enabling robots to learn new tasks from a single demonstration each, with no further training.

This involves exploring how neural networks can be trained on simulated data so that a single real-world demonstration at test time is enough to teach a robot a task. This way, the simulation data provides 99.99% of the robot's reasoning, while the one real-world demonstration then defines the task.

Simulation pre-training reduces the number of demonstrations needed to teach a robot a task, from hundreds to just one single demonstration.
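A minimal sketch of what such an in-context policy could look like, assuming a recurrent demonstration encoder and a feed-forward policy head; all names and dimensions here are illustrative, not the Robot Learning Lab's actual architecture:

```python
# Hedged sketch of the in-context idea: a policy pre-trained on simulated
# tasks is conditioned at test time on a single real demonstration, with
# no gradient updates.
import torch
import torch.nn as nn

class InContextPolicy(nn.Module):
    def __init__(self, obs_dim=32, act_dim=7, hidden=128):
        super().__init__()
        # Summarise the single demonstration into a task embedding.
        self.demo_encoder = nn.GRU(obs_dim + act_dim, hidden, batch_first=True)
        # Map (current observation, task embedding) to an action.
        self.policy = nn.Sequential(
            nn.Linear(obs_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, demo_obs, demo_acts):
        # demo_obs: (1, T, obs_dim); demo_acts: (1, T, act_dim)
        _, h = self.demo_encoder(torch.cat([demo_obs, demo_acts], dim=-1))
        task = h[-1]  # (1, hidden) task embedding from the demonstration
        return self.policy(torch.cat([obs, task], dim=-1))

policy = InContextPolicy()  # weights would come from simulation pre-training
demo_obs, demo_acts = torch.randn(1, 50, 32), torch.randn(1, 50, 7)
obs = torch.randn(1, 32)                   # a new real-world observation
action = policy(obs, demo_obs, demo_acts)  # the task is defined by the demo
print(action.shape)  # torch.Size([1, 7])
```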

This is in contrast to behavioural cloning techniques, which involve collecting many high-quality human demonstrations and retraining the model each time you want it to handle new situations or edge cases.

Training with behavioural cloning from real demonstrations and pre-training with simulated data are two very different strategies, each with trade-offs. However, Dr Johns argues that the efficiency of in-context imitation learning via simulation gives it an edge in rapidly acquiring a wide range of tasks.

Adaptive Machines: Learning Algorithms for Versatile and Resilient Robotics

Professor Antoine Cully is Professor in Machine Learning and Robotics and Director of Imperial’s Adaptive & Intelligent Robotics Lab.

In the fifth showcase, he walked us through how learning algorithms, world-models and open-ended research can help to increase the versatility and resilience of robots deployed in the real world.

By drawing on international case studies throughout his presentation, Professor Cully confronted the hard truth that robots are still prone to breaking down when we need them most.

He is driven by a bold mission to remove humans from risky jobs, ranging from nuclear disasters to concerts and search-and-rescue missions, by building machines that can adapt, recover, and carry on when things go wrong.

His research combines machine learning, world models, and anomaly detection to help robots learn to correct themselves after damage, compensate for unpredictable forces, and monitor critical infrastructure without invasive connections.
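As a toy illustration of this recovery loop, in the spirit of the quality-diversity methods (such as MAP-Elites) the lab is known for, a robot could keep a repertoire of diverse behaviours discovered in simulation and search it at runtime after damage. Everything below is a simplified assumption, not the lab's code:

```python
# Hedged sketch of damage recovery via a behaviour repertoire: build a map
# of diverse high-performing behaviours in simulation, then after damage
# trial repertoire entries and keep the one that still works.
import random

def simulate(behaviour, damaged=False):
    """Toy stand-in for a physics simulator returning forward speed."""
    gain, phase = behaviour
    speed = gain * (1.0 - abs(phase - 0.5))
    if damaged and gain > 0.7:  # damage penalises high-gain gaits
        speed *= 0.2
    return speed

# Build a repertoire: the best behaviour per 'niche' (here, binned gain).
repertoire = {}
for _ in range(1000):
    b = (random.random(), random.random())  # (gain, phase) gait parameters
    niche = round(b[0], 1)
    if niche not in repertoire or simulate(b) > simulate(repertoire[niche]):
        repertoire[niche] = b

# After damage, trial repertoire entries and keep the best performer.
best = max(repertoire.values(), key=lambda b: simulate(b, damaged=True))
print("recovered behaviour:", best, "speed:", simulate(best, damaged=True))
```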

Professor Cully’s vision is clear: resilient, self-correcting robots that do not just perform in the lab but survive and succeed in the chaos of the real world.

Impact is at the heart of this work: Professor Cully shared his belief in the potential of robotics to transform society by ultimately removing humans from the most dangerous tasks performed around the globe.

AI-guided Tumour Resection in Robotic Surgery

The sixth showcase was by Dr Stamatia Giannarou, Reader in Surgical Cancer Technology and Imaging at the Hamlyn Centre for Robotic Surgery and Head of The Cognitive Vision in Robotic Surgery Lab.

Dr Giannarou's presentation showcased her lab's cutting-edge work on AI-guided, robot-assisted tissue scanning, which is focused on accurately identifying tumour margins during operations, particularly in neurosurgical oncology.

By combining tiny, probe-based microscopes with AI and machine learning, her team can guide tissue scanning with high precision, producing clearer imaging of tissue and more accurate detection of tumour margins. This approach aims to make surgeries safer, improve tumour removal, and move closer to real-world use in operating theatres.

Closing message from the Co-Directors of Human and Artificial Intelligence

The Co-Directors of Human and Artificial Intelligence, Professor Alessandra Russo, Professor Will Branford, Professor Aldo Faisal and Professor Payam Barnaghi, added: 

“We are delighted to have combined a visit from NVIDIA into a full day programme for our student and academic community, featuring technical talks, a showcase of Imperial’s cutting-edge robotics and embodied AI research and tours of our world-leading facilities. Thank you to NVIDIA, and the more than 300 members of our community who attended. We look forward to continuing collaboration with industry and expanding opportunities like this going forward.”

As part of this coverage, the Co-Directors of Human and Artificial Intelligence would also like to highlight the longstanding Imperial Robotics Forum, an Imperial network of excellence, which brings together a community of experts from across the university and represents convergence science already in action.

About Human and Artificial Intelligence

The School of Convergence Science is central to Imperial's Science for Humanity strategy, which was formed to bridge existing silos across the university and further afield.

The School is mission-led - shaped by society's most complex challenges. It comprises four themes, including Human and Artificial Intelligence.

Discover some exemplar missions the Human and Artificial Intelligence theme is exploring, and connect with the School on LinkedIn.

 

Article text (excluding photos or graphics) © Imperial College London.

Photos and graphics subject to third party copyright used with permission or © Imperial College London.
