Seeing is believing. And now, new imaging techniques are taking human vision beyond nature.

Glasses A

Jack Maxwell

Prescription: Right eye -.0, left eye -0.25 / Research team: Adaptive Optics in the Photonics Group, Department of Physics / Research project: 3D refractive index tomography

Glasses B

Vanya Valindria

Prescription: Short-sighted: right eye -6.50, left eye -7.00 / Research team: Biomedical image analysis (BioMedIA) / Research project: Analysis of whole-body human MRI images

Glasses C

Matthew Lee

Prescription: Short-sighted: right eye -1.50, left eye -1.25 / Research team: Biomedical image analysis (BioMedIA) / Research project: Developing machine learning algorithms to extract clinically useful information from biomedical images, to aid doctors in diagnosis or treatment planning

Glasses D

Elizabeth Noble

Glasses E

Menglong Ye

Words: Becky Allen / Photography: Joe McGorty

It started with a piece of glass, possibly ground out of rock crystal in Italy, probably at some point in the first century BC: a tool to magnify what can be seen. Crude, yes. But effective. No wonder people were fascinated: it was a chance to see beyond the limits of our own senses.

Today, that fascination is even more intense as imaging science enables us to perceive not only the smallest phenomena in the universe but also some of the largest. But seeing is just the beginning; by harnessing the incredible power of modern imaging science – from medicine to robotics, and biophotonics to environmental sensing lasers – we can take human visual perception and understanding to new levels.

It all starts at the very highest level, way out in space, where scientists want to monitor climate change, pollution and even vegetation health using a new laser system being developed for the European Space Agency.

“Imagine a sensor small and powerful enough to be placed in a micro-satellite and which can assess the chemical state of an entire ecosystem with centimetre precision,” says the project’s technical lead, Dr Gabrielle Thomas of the Department of Physics. “The thing we are most excited about is that it can be used for remote monitoring of vegetation. It’s technology that could transform agriculture and land management, and support government and humanitarian efforts to manage and conserve resources across the globe.”

“I cannot imagine how I could live without glasses. When I put them on, things that were previously blurred or unrecognisable become clear. In a similar way, before we run our software on whole-body MRI images, all we see is the regular MRI scan. But after the software processes the image, we can detect, segment and recognise multiple organs.”

Vanya Valindria

Using lasers rather than relying on the Sun as a light source means remote sensing devices can work 24 hours a day and deliver much greater detail. By emitting 10,000 pulses a second, each only a few nanoseconds long, Thomas’s laser will be able to resolve things down to around 15 centimetres, compared with tens of metres from devices that use sunlight.
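As a rule of thumb, a pulse of duration τ sets a raw range resolution of about c·τ/2 – the factor of two accounts for the round trip – though clever timing electronics and pulse-shape analysis can push below that limit. A back-of-envelope sketch (the pulse lengths here are assumed purely for illustration):

```python
# Back-of-envelope lidar range resolution: a pulse of duration tau
# cannot cleanly separate echoes closer together than c * tau / 2
# (the factor of two accounts for the light's round trip).
C = 299_792_458  # speed of light, m/s

def range_resolution(pulse_seconds: float) -> float:
    """Approximate range resolution in metres for a given pulse length."""
    return C * pulse_seconds / 2

print(range_resolution(1e-9))  # 1 ns pulse -> ~0.15 m, i.e. ~15 cm
print(range_resolution(5e-9))  # 5 ns pulse -> ~0.75 m
```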

What is unique about her laser, however, is what it will be able to see, and that is thanks to alexandrite, a ruby-like crystal more commonly used by the cosmetics industry to remove unwanted hair, tattoos and wrinkles. Alexandrite produces light at the boundary between the red and infrared parts of the spectrum, and because this is where green plants switch rapidly from absorbing incident light to reflecting it, the laser can accurately measure the health of a forest or of crops in a field. And by making it small and light enough to be mounted on mini-satellites and drones, the laser will be able to monitor the health of huge swathes of forest or arable land.
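One standard way remote-sensing scientists quantify that red-to-infrared contrast is the normalised difference vegetation index (NDVI): healthy leaves reflect strongly in the near-infrared and absorb strongly in the red. The sketch below is illustrative only, not a description of this project’s actual processing:

```python
# A minimal sketch of the normalised difference vegetation index (NDVI):
# values near +1 suggest dense, healthy vegetation; values near 0 suggest
# bare soil or stressed cover. The reflectance values here are invented.
def ndvi(nir: float, red: float) -> float:
    """NDVI from near-infrared and red reflectance (each in [0, 1])."""
    return (nir - red) / (nir + red)

print(ndvi(nir=0.50, red=0.08))  # healthy canopy -> ~0.72
print(ndvi(nir=0.30, red=0.25))  # sparse or stressed cover -> ~0.09
```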

“By 2050, we need to increase the amount of food we produce by 70 per cent, and we need to manage our growing population’s impact on the planet,” says Thomas. “By helping farmers spray or irrigate crops only where needed, our system could be a really powerful tool in areas where resources are limited.”

The light produced by alexandrite covers just one part of the optical spectrum and, of course, scientists image using the full range. One optical technique, fluorescence imaging, has its roots in the sixteenth century, but the observation of green fluorescent protein in jellyfish in the 1960s began a transformation in imaging science. Today the phenomenon, where a material absorbs light at one wavelength and emits light at a longer wavelength, underpins many of the revolutions in microscopy that are transforming bioscience.

Fluorescence microscopy is so sensitive that individual fluorescent molecules can be tracked in space and time, and the ability to map specific proteins in live organisms enables scientists to directly ‘watch’ biological processes. At Imperial, life scientists work in multidisciplinary teams with physical scientists, like Professor Paul French of the Photonics Group, to gain further information about biological processes that were once beyond our sight. “This is an exciting time for biophotonics,” he says. “Traditional barriers to observation are being pushed back and life scientists are learning about biomolecular processes with unprecedented detail, speed and physiological relevance.”

French is developing two techniques known as fluorescence lifetime imaging (FLIM) and fluorescence resonance energy transfer (FRET), which allow researchers to watch molecules in time as well as space, and to see different molecules interacting. Working with chemists and biologists, his group showed how FLIM and FRET could be used to screen for inhibitors of HIV. Today they are extending this approach to complex 3D cell cultures and live disease models such as zebrafish.
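The heart of FLIM can be sketched in a few lines: record how fluorescence decays after a pulse of excitation, fit an exponential model, and read off the lifetime (FRET shortens the donor molecule’s lifetime, which is what makes interactions visible to FLIM). The toy example below uses synthetic data and a single-exponential model; real analysis must also handle instrument response and multi-exponential decays:

```python
# Toy FLIM analysis: fit I(t) = A * exp(-t / tau) to a noisy synthetic
# fluorescence decay and recover the lifetime tau.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau):
    """Single-exponential fluorescence decay model."""
    return amplitude * np.exp(-t / tau)

t = np.linspace(0, 10e-9, 200)  # a 10 ns measurement window
rng = np.random.default_rng(0)
signal = decay(t, 1000.0, 2.5e-9) + rng.normal(0, 10, t.size)  # fake data

(amplitude, tau), _ = curve_fit(decay, t, signal, p0=(500.0, 1e-9))
print(f"estimated lifetime: {tau * 1e9:.2f} ns")  # ~2.5 ns
```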

According to French: “If you’re studying a disease mechanism, you ideally want to see how molecules are interacting in live organisms – or at least in 3D cultures – rather than in thin layers of cells on a glass coverslip. These may be easier to image but can often give ‘false positive’ indications, for example of the performance of a drug candidate.” FLIM provides useful readouts of biomolecular interactions even when image quality is degraded, which allows drug discovery to be done in realistic biological environments. This matters because many drugs that work in very artificial conditions fail in the clinic: testing in a more realistic environment means more drug candidates fail early, and weeding them out as soon as possible is vital because clinical trials of new drugs are hugely expensive.

A key challenge in imaging science is that image data analysis is still dependent to a large degree on human interpretation, but two other projects currently being developed at Imperial are matching advances in imaging science with leaps in the technology to support it.

“Imaging is getting ever more complex, but humans have reached the limit of what they can process,” says Dr Ben Glocker of the Department of Computing, who trains machines to detect patterns of disease in medical images. “Although humans have an amazing ability to detect visual patterns and abnormalities, we find it difficult to analyse multi-dimensional, highly complex data, and to see changes over time.”

Which is where artificial intelligence comes in, he says: “Where machines really shine is making sense of large amounts of data and quantifying anything in that data, so we’re developing software to augment human skills by extracting clinically useful information from medical images that allows doctors to make the best decisions.”

As well as studying traumatic brain injuries and the potential of machine learning to help us find imaging biomarkers for hard-to-diagnose neurodegenerative diseases such as Alzheimer’s, one of Dr Glocker’s main areas of research is teaching machines to spot brain tumours in MRI scans. But whereas humans can learn from prototypes – show us one bicycle and we can recognise any bike – computers need many examples from well-defined settings. “To teach them how to find brain tumours, we use lots of MRI images from patients with brain tumours, and then use pattern recognition techniques to analyse them automatically so the computer figures out what it is in the image that tells us it’s a brain tumour.”
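In miniature, that training process looks something like the sketch below: gather labelled examples, describe each one with numerical features, fit a classifier, then ask it about unseen data. The two features and all of the numbers are invented for illustration; real pipelines work on full multi-channel MRI volumes, increasingly with convolutional neural networks:

```python
# Toy supervised pattern recognition for tumour detection: train a
# classifier on labelled per-voxel features, then classify new data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical features per voxel: [intensity, local contrast].
healthy = rng.normal([0.4, 0.1], 0.05, size=(500, 2))
tumour = rng.normal([0.7, 0.3], 0.05, size=(500, 2))
X = np.vstack([healthy, tumour])
y = np.array([0] * 500 + [1] * 500)  # 0 = healthy, 1 = tumour

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[0.72, 0.28]]))  # -> [1]: flagged as tumour-like
```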

But what happens once a tumour is identified? Alongside Dr Glocker’s work, Dr Stamatina Giannarou, an engineer and computer vision expert, recently began a Royal Society fellowship to use real-time multiscale imaging and robotics to help surgeons identify and remove those brain tumours more successfully. It is, she says, a hugely important area of research: “I picked neurosurgery because brain tumours still kill more children and adults under the age of 40 than any other cancer, so there’s a big space to help surgeons improve their outcomes.”

Cancer surgery’s main aim is to completely remove a tumour, at the same time as causing minimal damage to surrounding healthy tissue. For cancers elsewhere in the body, surgeons can ensure they excise the whole tumour by removing a little healthy tissue too. In the brain, however, this isn’t an option because losing healthy brain tissue can have catastrophic results.

At the moment, surgeons use pre-operative scans like maps to guide them during surgery, but these snapshots have major limitations. According to Giannarou, who has watched many such operations: “Once the surgeon opens the skull, you get what’s called ‘brain shift’: the brain deforms, and what you saw on the MRI isn’t the same as what you see now.” Instead of maps, she wants to give surgeons something more like the equivalent of GPS. “The idea is to use endomicroscopy and other imaging techniques during the operation, and combine this with robotic tools for accurate scanning,” she explains.

Robotics is vital because endomicroscopy probes are so small and bendy that even the steadiest surgeon’s hand would be unable to scan brain tissue with the precision necessary to collect optical biopsies and keep track of where the probe has been. The final element of the platform will be a database of images to help the surgeon make diagnoses in real time during the operation.

Giannarou believes the platform will benefit patients, surgeons and healthcare systems. “By resecting all the tumour and preserving all the healthy tissue you have better outcomes – increased survival and better quality of life. It also means you need fewer follow-up operations, which helps patients and the health service,” she says. “And by enhancing surgeons’ vision – giving them better navigational and cognitive cues – it will make brain surgery less stressful for surgeons.”

Finding real-world applications for advanced imaging techniques is what it’s all about. And it’s something that excites Gabrielle Thomas, who hopes her space laser system could be sitting on the launch pad in the next ten years. “It’ll be nerve-racking but super exciting,” she says. “And it could be so useful to so many people. That’s the most rewarding part. You spend so much of your life in the lab, it’s so abstract, but this could represent a step-change in the way we see the world and use those images to make a fundamental difference.”