The Institute for Security Science and Technology caught up with Security and Resilience MSc student Adam Hunter.
What was your background before you came to Imperial?
I left school with no A-levels (well, 2 N's and a U I think meets the threshold of 'no A-levels'!), so started a career as a lab tech working in food safety testing. After a move from London to Sheffield, still working as a lab tech, still in food safety testing, I realised I still wanted to pursue a university education. After looking at the local opportunities, I ended up enrolling on an engineering foundation year at Sheffield Hallam University. Then, partly because I quite liked the TV show CSI, and partly because the course director, Prof Alan Smith, was fantastically encouraging, my undergraduate degree was in Forensic Engineering.
Within a year of graduating, I was working for the Home Office Scientific Development Branch (HOSDB), which led to me spending 15 years in the Civil Service covering a variety of roles, all with a strong science and technology bias. These ranged from managing expert committees and serving as Private Secretary to the Chief Scientific Advisor to, my favourite, blowing stuff up (all in the name of ensuring the protection of the general public).
You recently founded the start-up Inqusiv. What does Inqusiv do and why is it important?
One frustration I didn’t realise I had within the Civil Service was watching data being collected without a clear strategy for how it would be used. With society becoming more data driven, and data being captured at a rate never seen before, using this data in a socially responsible way is essential. It is Inqusiv’s mission to help ease the burdens of a data-heavy world by developing tools and providing insights that translate data into information, allowing users to make more informed decisions and enabling socially responsible data usage.
The main focus of our work is to deliver a data platform for maritime navigators in polar and ice-infested regions of the world. The initial minimum viable product was developed during the MSc’s Hacking for Defence module. These modules are run by the Common Mission Project across a range of universities in the UK. To deliver the first platform that allows the sharing of these data, I have fantastic support from two colleagues from the MSc, Sonakshi Kaskhari and Dr Chris Martin, and from the Royal Navy as a key stakeholder providing operational feedback.
Other areas in which Inqusiv is looking to provide insights and relieve cognitive burden are ecology and personal data. For ecology, we are looking to develop a platform to draw together and share sightings of moths. With over 2,500 species in the UK alone, this is an insect that can provide valuable insights into the state of the environment, particularly by tracking ranges and sightings over time. For personal data, we are looking to make terms of service more accessible by using a large language model to make the information held within each document more easily understood.
What is your favourite part of the Security and Resilience MSc?
My motivation to do this MSc was to reinforce knowledge developed over years in Government, and up-skill myself in new technologies that have developed since my first stint in university. This course absolutely delivered on that requirement.
As someone who went in as a physical engineer, I was greatly surprised and impressed by how important the behavioural sciences are. I was deeply entrenched in the school of 'social sciences are not real science' thinking before joining this course. But the module lead, Margaret Wilson, and contributors such as Paul Martin were excellent in highlighting that without understanding the behaviours of people, systems, organisations, and governments, we have no hope of realising the full potential of the world around us.
As part of the Security and Resilience MSc, students complete one or more research projects. Please tell us about your project(s) and what you hope to achieve.
For my MSc I have completed one project and am currently working on my final project.
My first project looked at definitions, trust, and acceptance of AI. From this initial desk review I was surprised to find that a significant number of papers covering AI technology lack fundamental definitions of what AI is, making meta reviews difficult, if not impossible, to undertake. With more discussion about machine learning and autonomy, public understanding of these systems has been coloured by buzz terms, the worst of which, to me, is 'AI'. Whilst there are significant efforts by organisations like ISO and BSI to develop definitions surrounding 'AI', it is my conjecture that the term is distracting people from the elephant in the room. That is: for 60 years, society has slowly developed its trust and acceptance in deterministic computing.
Essentially, a computer will provide a consistent and repeatable answer each time you supply it with the same input. With the increasing power of computing, and new approaches such as machine learning becoming more accessible to end users, there has been a pivot to asking computers different types of question. Society is now moving from the deterministic era of computing to a probabilistic one; that is, consecutive runs with the same input will not necessarily produce the same answer. This has led to my hypothesis that society would be better off discussing probabilistic computing rather than 'AI'.
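The distinction can be illustrated with a toy sketch (the functions below are illustrative stand-ins, not drawn from any real system): a deterministic function always returns the same answer for the same input, while a sampled, model-like function generally does not.

```python
import random

def deterministic(x):
    # Classic computing: the same input always yields the same output.
    return x * x

def probabilistic(x, rng):
    # A stand-in for a sampled model output: the same input can yield
    # a different answer on each call.
    return x * x + rng.gauss(0, 1)

rng = random.Random()  # unseeded: each call draws fresh randomness

# Deterministic: repeated calls agree exactly.
print(deterministic(3) == deterministic(3))  # True

# Probabilistic: repeated calls almost never agree.
print(probabilistic(3, rng) == probabilistic(3, rng))  # almost certainly False
```

Note that the probabilistic function becomes repeatable again if the random source is seeded, which is one reason the deterministic/probabilistic framing is about how systems are used, not just how they are built.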
My final project is looking at a dynamic way of using language in academic papers. The aim is to identify research papers outside of the user’s field of expertise that might be of use to their research. I hope to deliver an approach that uses Large Language Models in a comparative way, while also factoring in temporal aspects of language usage, to identify papers of interest and therefore drive innovation across disciplines. A possible secondary outcome could be to allow researchers to understand how terminology changes over time; this could track how terms such as 'quantum', 'nanotech', and, dare I say it, 'AI' influence the language used in research.
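One very rough way the comparative-plus-temporal idea could be sketched (everything here is a hypothetical illustration, not the project's actual method: the vectors, the `relevance` function, and the `decay` parameter are all assumptions): papers are represented as embedding vectors, compared by cosine similarity, and down-weighted as the gap between their publication year and the query year grows, as a crude stand-in for terminology drift.

```python
import math

def cosine(u, v):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def relevance(query_vec, paper_vec, paper_year, query_year, decay=0.05):
    # Hypothetical score: semantic similarity damped exponentially by the
    # age gap, so older papers whose terminology may have drifted rank lower.
    return cosine(query_vec, paper_vec) * math.exp(-decay * (query_year - paper_year))

# Identical vectors from the same year score 1.0; older papers score less.
print(relevance([1.0, 0.0], [1.0, 0.0], 2024, 2024))  # 1.0
print(relevance([1.0, 0.0], [1.0, 0.0], 2010, 2024))  # < 1.0
```

In practice the embeddings would come from a language model rather than hand-written vectors, and a real treatment of temporal drift would likely compare embeddings trained on different time slices rather than apply a simple decay.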
What is your key advice for students wanting to pursue a career in the security and resilience space?
Firstly, knowledge from any field can be applied to security and resilience scenarios. Knowledge I gained from video analysis was used in explosive forensics; from playing Massively Multiplayer Online Role-Playing Games, insights on how in-game chat functions worked were provided to police investigators; and an understanding of police procedures developed in one role was applied in another when I needed to challenge advisors on their approaches. There is literally no field that is useless; its use case just has yet to be understood.
Also, as someone who is disabled (hearing loss), I know that being different is hugely important when looking at security and resilience. Anyone 'normal' can develop processes that will work and be safe for the vast majority of people. People with something different, whether through ability, interests, values, or ethics, bring new insight into how those processes might hamper safe operations or facilitate nefarious activity, which, if addressed, will make a better system.
Secondly, think long and hard before agreeing to some roles. My first move within the Home Office Scientific Development Branch had push and pull factors. The main push factor was the lack of mental health support available after working with the police on several cases. I am glad to say this has changed significantly, and I have friends currently working in similar roles in a much healthier working environment than I experienced. Just remember that watching police procedural dramas or true crime documentaries will not prepare you for a day job covering these fields. Different people can, and will, be affected in different ways; triggers can literally be anything and can change over time, but there should be a strong and active support network for you to use if needed. That said, I do look back at these roles with a great sense of pride.
Article text (excluding photos or graphics) © Imperial College London.
Photos and graphics subject to third party copyright used with permission or © Imperial College London.
Institute for Security Science & Technology