Use your head – guiding principles underpin the use of AI at Imperial, and the main rule is: be critical
All new technology demands a response, and how our community responds to the development, use and integration of AI will shape the leading role Imperial takes.
It’s why we formed the AI Futurists group last summer, a diverse team of five academics from across the faculties of Natural Sciences, Engineering, Medicine and the Business School, who have been devising ways to further Imperial’s AI capabilities in the education space.
Among other things, in the past year we’ve established a Special Interest Group, delivered two cross-faculty AI hackathons, deployed the Business School Faculty Bot, and launched a staff-facing Introduction to Generative AI at Imperial course. We’ve also built relationships with stakeholders and friends, including the Education Development Unit (EDU), ICT, the Centre for Academic English and internal comms teams, to align AI initiatives with staff and student needs.
Different disciplines at Imperial have their own bespoke guidance and views on AI, and we suspected it would be difficult, and perhaps not useful, to have one university-wide, prescriptive policy. Instead, we wanted to offer an approach, a way to help leadership think about AI in education, so we drew on something that existed already – Imperial’s values and behaviours framework. Our guiding principles are respect, collaboration, excellence, integrity and innovation.
Many staff want training in specific tools, but the tools change very fast, so I prefer to focus on skills, competencies and behaviours. Departments have their own positions on whether using AI is permitted for assessments, for example, and the EDU says you can avoid it, take a light-touch approach, or use it openly, as long as you are transparent about those choices. Transparency is important, of course, as it’s a pathway to integrity. But I would say the skills of critical reading and critical evaluation, which were taught long before AI, are paramount here.
In my library context, there was some discussion about whether AI literacy should be part of the existing Information and Digital Literacy Framework or something separate, but I feel strongly that it should be part of the existing framework. It’s an extra dimension, not separate. We’ve always had to teach how to avoid plagiarism, for example, and critical approaches to evaluating information sources, and it’s the same with AI.
Working with the Ed Tech Lab, the AI Futurists have devised two courses, one for students and one for staff. While the course for staff encourages reflection and values excellence, the Introduction to Generative AI for students encourages an understanding of what’s actually happening within AI, not just blind acceptance of its outputs. If you put something in, can you retrieve it? If you’re searching, why? And what are you searching? Which journals? Which authors? Like a library collection, what’s in it?
We’re also encouraging students to think about ethical concerns such as bias, discrimination and risk of harm. We want students and staff to be high-level, critical, demanding consumers of AI.
The benefits are plain to see. In disciplines within medicine, such as radiology, AI tools are becoming increasingly essential, with implications for employability. That’s great, and we want to make sure the disciplines are learning from each other, which is where collaboration comes in.
Generative AI is not just for report writing; it can be great for coding, images and video. AI Futurist Dr Rhodri Nelson (MSci Physics 2004, MSc Mathematics 2005), a Senior Teaching Fellow in Computational Data Science, has been hosting interdisciplinary hackathons where teams create AI agents to tackle a particular problem, making sure we share learnings and don’t work in silos.
As a group, we’ve embraced the value of innovation, which is an approach in itself. We don’t jump in; we encourage dialogue and critical, thoughtful approaches to using AI. It’s an experiment: let’s try it, then reflect on how it went.
One definition of innovation is that you don’t know the outcome, and what AI will look like in 2030 is still uncertain. It’s emerging and constantly changing. But we can identify the skills students will need to cope with it, which goes back to critical thinking and evaluation.
Coco Nijhoff is a Senior Teaching Fellow in Library Services and Imperial AI Futurist.
