Here’s a batch of fresh news and announcements from across Imperial.
From new research into age-related vision loss to an exploration of explainable AI, here is some quick-read news from across the College.
An international group of scientists, including researchers at Imperial, UCL and Queen's University Belfast, has found that hard deposits of calcium and phosphates in the retina are associated with age-related vision loss, or macular degeneration.
They say detecting the ‘calcified nodules’ could help early intervention in vision loss, meaning some patients could be treated with simple measures such as modifying their diet.
Co-author Dr Sarah Fearn, from Imperial’s Department of Materials, used time-of-flight mass spectrometry to help identify the hard mineral deposits in the back of the eye that can be warning signs of vision loss.
She said: “Early changes in the back of the eye can lead to the build-up of hard mineral deposits. The build-up of these mineral deposits is an indicator of irreversible damage to the retina, but finding them early can help doctors prevent or slow down vision loss.”
Teaching Award Winners Celebrate Success
Imperial’s award-winning medical education team attended a prizegiving reception in Edinburgh on 7 November.
CATEs (Collaborative Awards for Teaching Excellence) recognise and reward collaborative work that has had an impact on teaching and learning. Introduced in 2016 by the Higher Education Academy, which is itself part of Universities UK, the Award is open to all providers of higher education across the four nations of the UK.
Imperial’s Dr Sonia Kumar led the team to success by demonstrating innovative teaching in the Faculty of Medicine. Dr Kumar said: “Winning the CATE is a momentous occasion for all of us in the team. Receiving such an accolade and national acknowledgement for our work will serve as a potent catalyst for us to now evolve even further, sharing our vision and way of working with others.”
Machines we can trust
Imperial researchers’ approach to developing explainable artificial intelligence is highlighted in a new longform feature article.
AI systems are capable of increasingly impressive performance, automating tasks and carrying out some even better than we can. As the buzz surrounding AI grows, one pitfall is that the principles the systems use to make decisions are often hidden from end-users and even from their designers.
This has consequences. You may not be persuaded to watch the films recommended by streaming services if you don’t know why they have been recommended. More seriously, it is hard to trust the safety of autonomous cars, or the fairness of systems that make financial decisions, if no one fully understands how their algorithms work.
To meet this challenge, researchers at Imperial are developing explainable AI we can trust, and even learn from and collaborate with. You can learn about this work in a new long-form feature: Machines we can trust, learn from and collaborate with.