MRes students work on their research project throughout the year. You can apply for one of the projects listed below, or contact your preferred supervisor to discuss a different project.
You must name at least one potential supervisor in your personal statement when you apply.
Applications will be considered in four rounds. We encourage you to apply in Round 2 or 3. If you are applying in Round 4, some projects may already have been allocated, so please consider including a second or third choice of project in your application.
Visit our How do I apply? page for full details of the application process including deadlines.
Projects available for 2026-27 entry
- Dr Amy Howard
- Dr Andriy Kozlov
- Professor Anil Bharath
- Dr Amanda Foust
- Dr Chris Rowlands
- Dr Guang Yang
- Dr Kaushik Jayaram
- Dr Laki Pantazis
- Professor Dario Farina
- Dr David Labonte
- Dr Hayriye Cagnan
- Dr James Choi
- Professor Martyn Boutelle
- Professor Mengxing Tang
- Professor Aldo Faisal
- Professor Etienne Burdet
- Professor Holger Krapp
- Professor Manos Drakakis
- Professor Reiko Tanaka
- Professor Rylie Green
- Dr Sophie Morse
Dr Amy Howard
Profile: https://profiles.imperial.ac.uk/a.howard
Contact details: a.howard@imperial.ac.uk
| Project title | Description |
| Mapping Brain Connectivity Via Low-Cost 3D Polarised Light Imaging | Polarised light imaging (PLI) is a powerful microscopy method for ex vivo investigations of brain connectivity ("the structural connectome"). PLI utilises the optical property of birefringence to estimate axonal orientations within brain tissue at micron-scale resolution. However, most PLI systems can only reliably inform on axon orientations within the 2D microscopy plane, which limits their utility for imaging the 3D trajectories of axons linking different brain regions. Extracting 3D orientations typically requires bespoke set-ups which are expensive, limiting widespread access. This project aims to develop a low-cost 3D polarised light imaging system for accessible, high-resolution connectomics. This will include microscope development, data acquisition using post-mortem brain samples, image analysis and connectivity mapping. Investigations could consider whole-brain connectomics, or focus on specific structures such as the hippocampus, which has key functions in memory and learning and is implicated in conditions such as Alzheimer's disease. Comparisons with diffusion MRI acquired in the same tissue can be used to validate and drive methods for estimating brain connectivity in vivo. (An illustrative PLI signal-model sketch follows this table.) |
| Can Combined MRI-PLI Analysis Provide Reliable Myelin Estimates? | Myelin, the insulating sheath surrounding axons in the brain, is crucial for the efficient transmission of electrical signals in the nervous system, enabling faster communication between neurons and supporting overall brain and spinal cord function. Reliable myelin imaging is therefore essential to the diagnosis of demyelinating pathologies such as multiple sclerosis. Polarised light imaging (PLI) - a microscopy method sensitive to myelinated axons in the brain - uses the optical property of tissue birefringence to inform on axonal orientations in ex vivo brain tissue samples. One of the signals from PLI, the tissue retardance, depends simultaneously on both the 3D orientation of axons and the amount of myelin in the tissue. Without the use of bespoke set-ups, these two signals are difficult to disentangle, making robust analysis of either property (the 3D orientation or the amount of myelin) ill-posed. This project will build on the combined analysis of MRI and PLI data to develop a robust method to simultaneously estimate both axonal orientation and degree of myelination. These data can provide insight into how myelin varies across different pathways in the brain, and into the preferential demyelination of specific white matter bundles in pathological conditions. Comparisons with myelin-sensitive MRI in the same samples will elucidate the extent to which demyelinating pathologies can be detected in vivo. |
| Computational Neuroanatomy: Machine Learning for Microscopy in the Brain | The brain’s microstructure is remarkably rich and diverse — from the layered organisation of the cortex to intricate bundles of axons weaving through white matter. These features are vividly captured in high-resolution microscopy images, revealing striking regional differences in cellular and fibre architecture. In this project, you will use image analysis and machine or deep learning techniques to extract meaningful biological information from microscopy data and map microstructural variation across the brain. You will have access to detailed stained histological sections highlighting cell bodies and axonal architecture, as well as polarised light imaging (PLI) data that describes fibre orientation in fine detail. Students are welcome to propose their own ideas, but possible project directions include: - Cell segmentation and classification of cell types - Identification of cortical layers - Multi-modal brain parcellation or region classification - Extraction of spatial gradients across the brain - Comparison with MRI data acquired in the same brains - Cross-species analysis to explore similarities and differences across rodents, monkeys, and humans This project offers an opportunity to apply and expand skills in image analysis, machine learning, and neuroscience. It is well suited to students interested in medical imaging, neuroscience, and machine learning. |
| From MRI to Microns: Mapping Brain Gradients Across Scales | How do microstructural features of the brain — like cell density, laminar organisation, or fibre architecture — relate to large-scale organisation seen in MRI? In this project, you will explore this question by extracting microstructural gradients from microscopy data and comparing them to macro-scale gradients derived from MRI, all within the same brain. You’ll work with high-resolution histological data (e.g. Nissl, myelin, PLI) alongside co-registered MRI (e.g. T1, diffusion), using dimensionality reduction techniques (e.g. diffusion embedding, PCA) to generate and compare gradients across modalities. The ultimate aim is to improve in vivo human MRI analysis by providing biological validation for MRI-derived gradients, informing more anatomically grounded brain parcellations, and supporting the development of imaging biomarkers linked to underlying tissue properties. The project is well suited to students interested in multiscale brain mapping, medical imaging, and computational neuroscience. |
| Tracking Connections: Better Methods for MRI Structural Brain Mapping | Mapping how the brain is structurally connected is central to human neuroscience, yet our ability to achieve this in living people (via MRI tractography) remains limited in accuracy, particularly in complex regions such as subcortical structures, crossing fibres, and cortical entry zones. This project will explore new tractography methods using multi-modal MRI and/or the incorporation of high-resolution microscopy data (e.g. Nissl, myelin stains, polarised light imaging) for model development or validation. Depending on interests, the project can focus purely on MRI-based approaches, or expand to multimodal analysis combining MRI and microscopy. Possible directions include: - Developing new fibre tracking algorithms using machine or deep learning (e.g. reinforcement learning) - Designing tractography methods specialised for subcortical or superficial white matter regions - Mapping cortical fibre fanning and entry patterns, using microscopy for validation or model inspiration - Creating high-resolution tract reconstructions in bespoke datasets and translating these methods to standard human MRI data Our aim will be to develop and evaluate methods that ultimately benefit in vivo human connectivity mapping by improving orientation estimation, resolving complex white matter configurations, and incorporating biologically informed constraints derived from MRI and/or microscopy. This project is well suited for students interested in medical image analysis, brain connectivity, machine learning, and neuroanatomy. |
| Redundancy or Specialisation? Linking Brain Connectivity to Microstructure | Understanding how different brain regions are connected — and how their microstructure supports these connections — is a key challenge in neuroscience and neuroengineering. This project will explore whether structurally connected brain regions are also microstructurally similar, and how this relationship varies across functional networks in the brain. You will use a unique multimodal dataset that pairs diffusion MRI (which maps structural connections) with high-resolution histology (e.g. Nissl and myelin stains, polarised light imaging) that captures microstructural features such as cell density, layer thickness, and fibre orientation. The aim is to develop a quantitative analysis framework that: - Measures microstructural similarity between brain regions - Links this to connectivity strength and distance - Tests whether similarity is greater within networks (supporting redundancy), or systematically different between networks (supporting computational specialisation) You may also explore whether these patterns follow smooth microstructural gradients across the brain or reflect discrete network boundaries. This project will involve biomedical image analysis, feature extraction from multimodal imaging, and applying computational models to study brain structure–function relationships. It is suited to students interested in neuroimaging and computational neuroscience. |
| Biophysical Modelling of Tissue Microstructure with MRI | MRI — and especially diffusion MRI — lets us explore the microstructure of tissue, revealing details like cell shape, size, and composition without the need for invasive procedures. To interpret these signals, we use biophysical models: mathematical representations of how water moves through brain and body tissue. By modelling different tissue types with simplified geometries (for example, "sticks" for axons in white matter or "spheres" for cell bodies in grey matter), we can predict the MRI signal and estimate key features such as axon density, fibre orientation, or cell body size. These models act as a kind of in vivo microscope, offering insight into the living brain and body. They also provide a powerful tool for detecting and monitoring changes caused by cancer, disease, or injury. Students are welcome to propose projects to: - Develop or extend models to represent specific disease-related changes in brain tissue - Optimise MRI acquisition protocols to better measure particular features - Design and implement novel model fitting strategies to extract useful parameters from data - Build digital twins of brain tissue, using Monte Carlo simulations to track how water molecules move through realistic 3D meshes, providing a more flexible alternative to simplified equations This project is well suited for students interested in biomedical imaging, mathematical modelling, and applying engineering methods to brain health. Depending on the student's interests, the project can lean more towards theory, simulation, data analysis, or experimental design. |
| High-Resolution Mapping and Modelling of Hippocampal Connectivity | The hippocampus is a critical brain structure involved in memory and navigation, but its internal connectivity is challenging to capture with standard MRI due to its small size and complex organisation. In this project, you will develop methods for high-resolution mapping of hippocampal connectivity, using high-resolution diffusion and structural MRI and/or microscopy. The aim is to explore how fibre pathways and subfields within the hippocampus can be mapped more accurately at high resolution, and to develop a model to translate this detailed information to lower-resolution, standard MRI datasets. This approach aims to make insights gained from specialised datasets applicable to broader populations or clinical settings. Depending on interests, the project may include: - Analysing high-resolution MRI or microscopy data to extract detailed hippocampal connectivity - Combining structural and diffusion MRI for more accurate mapping - Building a model to infer fine-scale connectivity from standard MRI inputs - Testing the method on publicly available human datasets (e.g. HCP) This project is ideal for students interested in neuroimaging, neuroanatomy, data fusion and computational modelling. |
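As background for the PLI-based projects above, the sketch below simulates a simple PLI signal model and recovers the in-plane fibre direction by Fourier analysis of the measured sinusoid. It also makes the ill-posedness discussed above concrete: myelin content and out-of-plane inclination enter the signal only through the retardance. The model, set-up and parameters are illustrative assumptions, not the specific systems used in these projects.

```python
import numpy as np

# Toy PLI forward model and fit. Illustrative assumptions: a rotating-polariser
# transmission set-up with measured intensity
#   I(theta) = I0/2 * (1 + sin(2*(theta - phi)) * sin(delta)),
# where phi is the in-plane axon direction and the retardance
#   delta = delta_max * cos(alpha)**2
# couples myelin content (via delta_max) to the out-of-plane inclination alpha.

def pli_signal(thetas, phi, alpha, delta_max=np.pi / 2, i0=1.0):
    delta = delta_max * np.cos(alpha) ** 2
    return 0.5 * i0 * (1.0 + np.sin(2 * (thetas - phi)) * np.sin(delta))

thetas = np.linspace(0, np.pi, 18, endpoint=False)  # polariser rotation angles
signal = pli_signal(thetas, phi=np.deg2rad(40), alpha=np.deg2rad(30))

# Fourier analysis of the sinusoid recovers the in-plane direction and
# |sin(delta)|. Note that alpha and delta_max only appear through delta, so
# inclination and myelin content cannot be separated from a single 2D series.
a = 2 * np.mean(signal * np.sin(2 * thetas))
b = 2 * np.mean(signal * np.cos(2 * thetas))
i0_est = 2 * np.mean(signal)
phi_est = 0.5 * np.arctan2(-b, a) % np.pi
retardance_est = 2 * np.hypot(a, b) / i0_est

print(f"in-plane direction: {np.rad2deg(phi_est):.1f} deg (true: 40.0)")
print(f"|sin(delta)|: {retardance_est:.3f}")
```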
Dr Andriy Kozlov
Profile: https://profiles.imperial.ac.uk/a.kozlov
Contact details: a.kozlov@imperial.ac.uk
| Project title | Description |
| Receptive-field features and nonlinearities of auditory neurons | The project's aim is to characterise receptive fields and nonlinearities in auditory neurons using new receptive-field analysis methods. Data will be obtained in the lab. Knowledge of data analysis and proficiency in deep neural networks are required. (A minimal receptive-field estimation sketch follows this table.) |
| Biomimetic neural networks | This is a machine learning project that is a continuation of our published work: https://www.biorxiv.org/content/10.1101/2023.10.26.564127v1 Proficiency with PyTorch is required. It is appropriate for students with a computational background who are experienced in training ANNs and interested in fundamental questions about natural and artificial neural networks. For more information, candidates are encouraged to read the above paper and contact the supervisor. |
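To illustrate what receptive-field characterisation involves for the first project above, here is a minimal sketch of the classical spike-triggered average on simulated data. It is purely illustrative: the project will use newer receptive-field analysis methods, and every parameter below is an assumption made for the example.

```python
import numpy as np

# Spike-triggered averaging on a simulated linear-nonlinear (LN) neuron.
# All parameters are invented for the example.

rng = np.random.default_rng(0)
n_samples, rf_len = 50_000, 40

true_rf = np.exp(-np.arange(rf_len) / 8.0) * np.sin(np.arange(rf_len) / 3.0)
stimulus = rng.normal(size=n_samples)       # Gaussian white-noise stimulus

# Matrix of stimulus history windows; windows[i] = stimulus[i : i + rf_len].
windows = np.lib.stride_tricks.sliding_window_view(stimulus, rf_len)
drive = windows @ true_rf[::-1]      # linear filtering (most recent sample last)
rate = np.maximum(drive, 0.0)        # static output nonlinearity (rectification)
spikes = rng.poisson(0.1 * rate)     # Poisson spike counts

# Spike-triggered average: spike-count-weighted mean of stimulus windows.
sta = (spikes[:, None] * windows).sum(axis=0) / spikes.sum()

print("correlation between STA and true filter:",
      round(float(np.corrcoef(sta[::-1], true_rf)[0, 1]), 3))
```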
Professor Anil Bharath
Profile: https://profiles.imperial.ac.uk/a.bharath
Contact details: a.bharath@imperial.ac.uk
| Project title | Description |
| Neural Architectures for Predicting the Behaviour of Dynamical Systems | Fuelled by artificial neural architectures and backpropagation, data-driven approaches now dominate the engineering of systems for pattern recognition. However, the predictive modelling of the behaviour of complex dynamical systems - such as those governed by systems of coupled differential equations - remains challenging in two key ways: (i) long-term prediction and (ii) out-of-distribution prediction. Recent progress in disentangled representations (Fotiadis et al., 2021) has nudged the field forward, but it is now time to return to the underlying neural architectures, seeking those that are better suited to the intrinsic dynamics implicit in a system of equations. We seek to explore different approaches to this problem, including Siamese network structures, progressive network growth or, perhaps, neurons which incorporate some form of plasticity. |
| Learning from Mostly Unlabelled Data | Although data-driven machine learning is now being used to build components of AI systems for medical diagnostics, there is a need to learn representations that are well suited to new forms of imaging data for which ground truth does not yet exist. This problem is not fully addressed in the mainstream AI field, where it is often assumed that data are readily available with ground truth, and that the (data, label) tuples exist or can be made in sufficient quantities. But, as new ways of imaging or measurement become available, the "data" part of that tuple can be subtly or radically different. Subtle differences can probably be handled by transfer learning, but data that is radically different requires new approaches to learning good deep representations. This project will investigate new approaches to self-supervised or unsupervised learning that do not rely on ground truth labels and can be applied to different tasks. Promising approaches are i) contrastive learning and ii) masked autoencoders. This project will focus on investigating both of these in the context of "world models", an increasingly important topic in AI. |
| Uncovering the neural code of DRL agents | Deep neural networks can contain hundreds of thousands to millions of parameters collected in a layered organisation. The sheer number of parameters assigned to even a single neuron makes it difficult to interpret how the network achieves a desired function (such as solving a control task, or a visual task). Interestingly, experimental neuroscience has developed a series of tools that can be applied to problems of this complexity; even though complete "explainability" of how a neural network works is very difficult, there are useful principles that can be investigated. One of these principles is population codes, by which we mean the manner in which a group of artificial neurons jointly encode a stimulus or state of the world. The same principle can be applied to understanding how an RL agent controls (through a policy) a robot arm, or performs a tracking and detection task. Our aim in this project is to use response-weighted noise averaging, derived from neuroscience, but applicable to explaining some aspects of neural coding. The project builds on strong work from an undergraduate project, and several PhD projects, in order to take a step forward in the explainability of neural networks through concepts borrowed from neuroscience. |
| Forget about it: Machine Unlearning of Individuals’ Data | Motivation: Large Language Models (LLMs) are quickly becoming ubiquitous tools in many domains, improving the efficiency of workers performing repetitive text-based tasks, such as writing and summarizing documents, drafting reports, or editing text. This comes at a hidden cost, however, as the LLMs are trained on vast amounts of data, which was contributed by humans, potentially without their consent. Once used for training, the example contributes to the LLM's outputs for the entire lifespan of the LLM, potentially indefinitely. This is at odds with the 'right to data erasure' [1], which gives individuals the right to request the deletion of their data from a collection, such as a dataset or a machine learning model. Even organizations in countries that do not formally enforce such laws might be bound by them when processing data from citizens of other nations (e.g., EU), an increasingly likely scenario in a globalised world. This inhibits the use of state-of-the-art LLMs in domains where compliance with rights pertaining to individuals’ data is paramount, such as finance, law and healthcare. This gives rise to the research area of "Machine Unlearning" [2], a collection of methods to remove the impact of selected training data on trained machine learning models. However, it is infeasible to apply these methods to conventional deep neural network-based LLMs, because of their size and complexity and because, after training, the effect of a single training sample on LLM outputs is intractable. An obvious choice for an architecture that can be used to enforce data erasure is the K-Nearest-Neighbors (K-NN) LLM, where the contribution of each example to each output is exactly quantifiable, and thus training examples can easily be removed from future computations if necessary. Due to this traceability, however, K-NN LLMs have been shown to suffer from a higher risk of disclosing potentially private training data compared to conventional LLMs [3]. This gives rise to the exciting opportunity to propose a methodology that enforces data erasure by relying on K-NN LLMs, and to quantify and mitigate the associated privacy issues [4]. Objectives & Deliverables: In this project, the student will have the opportunity to gain exposure to state-of-the-art research at the intersection of different areas, including Machine Learning, Privacy and Security, and Large Language Models. Specifically, the student is expected to (a) implement a mechanism that allows data providers (e.g., patients) to revoke the use of their data for an existing LLM of a specific architecture (e.g. K-NN LM); (b) systematically evaluate the impact of revocation on a selection of downstream tasks relevant in healthcare settings, such as classification or summarization; (c) evaluate the potential of the modified architecture to disclose sensitive information and (d) investigate approaches to mitigate such disclosure. (A toy datastore-deletion sketch follows this table.) Co-supervisors: Dr Viktor Schlegel and/or Dr Zhengzi Xu References: 1: e.g., Right to data Erasure in EU’s General Data Protection Regulation 2: https://arxiv.org/pdf/2306.03558 3: https://iclr.cc/virtual_2020/poster_HklBjCEKvH.html 4: https://aclanthology.org/2023.emnlp-main.921.pdf |
| So Different Yet So Alike: Generating Synthetic Examples to Label Sensitive Data without Violating Privacy Laws | Motivation: Large Language Models (LLMs) are quickly becoming ubiquitous tools in many domains, improving the efficiency of workers performing repetitive text-based tasks, such as writing and summarizing documents, drafting reports, or editing text. However, due to their size, it becomes increasingly infeasible for small and/or non-technical organizations to deploy their own solutions, forcing them to rely on "big tech" providers like Microsoft or Google who have the resources to host resource-intensive LLMs. These providers might be located in other countries, which violates the requirement of local (national level) data storage and processing enforced by many governments [1]. To address this issue, one idea under exploration is to generate data that is at the same time both different from each individual example and representative of a collection of examples [2]. This data can be freely shared and labelled (e.g., whether an example describes a smoking or non-smoking patient), and labels obtained on this generated data can then be mapped back to the collection of private records. This approach allows organizations to leverage the advanced capabilities of LLMs without exposing their confidential data to external providers, thus maintaining compliance with local data processing regulations. However, the examples must be generated by considering the trade-off of being similar enough to warrant the same labels but at the same time not representative of any single patient record. Failing to achieve this would constitute a privacy breach [3,4]. Objectives & Deliverables: In this project, the student will have the opportunity to gain exposure to state-of-the-art research at the intersection of different areas, including Machine Learning, Differential Privacy and Large Language Models. Specifically, the student is expected to (a) adapt a mechanism to generate and label synthetic data representative of a private collection of texts; (b) design and implement an algorithm to transfer labels from public to private data; (c) evaluate the overall performance of the method and the potential of disclosing sensitive information by means of existing evaluation protocols. References: 1: SG Personal Data Protection Act 26(1): https://sso.agc.gov.sg/Act/PDPA2012?ProvIds=pr26- 2: https://arxiv.org/pdf/2210.13918 3: https://www.ieee-security.org/TC/SP2017/papers/313.pdf 4: https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf |
| New Approaches to Machine Learning | The current approaches to many forms of AI rely on the existence of labelled data, or simulation environments where reward structures can be easily defined. An alternative approach exists in the area of "Active Inference", where the learning algorithm seeks to a) reduce uncertainty about predictions of sensory inputs, b) predict the results of actions on sensory inputs and c) gain some intrinsic reward by proposing sensory or motor transformations (such as a change of viewpoint) that improve the performance of a) and b). Currently, the key challenge to exploring these ideas lies in implementation. The goal of this project is to explore active inference using language models that have access to variables and sensory input of some form, harnessing these LLMs either through fine-tuning or in-context learning to explore the principles of active inference, and to suggest mechanisms to implement effective learners in different domains requiring active sensing and control. |
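As a toy illustration of the datastore-deletion idea behind the machine unlearning project above ("Forget about it"), the sketch below stores (context embedding, next token, provider) triples and erases all of a provider's entries on request. The class and variable names are invented for the example; a real K-NN LM would use a trained language model's hidden states and a fast approximate nearest-neighbour index.

```python
import numpy as np

# Toy k-nearest-neighbour language model with per-provider data erasure.
# "Unlearning" a provider simply deletes their datastore entries, so their
# data can no longer influence any future prediction.

class KNNLanguageModel:
    def __init__(self, dim=64, k=8):
        self.dim, self.k = dim, k
        self.keys = np.empty((0, dim))      # context embeddings
        self.values = []                    # next tokens
        self.providers = []                 # who contributed each entry

    def add(self, context_embeddings, next_tokens, provider):
        self.keys = np.vstack([self.keys, context_embeddings])
        self.values += list(next_tokens)
        self.providers += [provider] * len(next_tokens)

    def predict(self, query):
        # Retrieve the k nearest stored contexts and vote on the next token.
        dists = np.linalg.norm(self.keys - query, axis=1)
        nearest = np.argsort(dists)[: self.k]
        tokens = [self.values[i] for i in nearest]
        return max(set(tokens), key=tokens.count)

    def forget(self, provider):
        # Data erasure: drop every datastore entry from this provider.
        keep = [i for i, p in enumerate(self.providers) if p != provider]
        self.keys = self.keys[keep]
        self.values = [self.values[i] for i in keep]
        self.providers = [self.providers[i] for i in keep]

rng = np.random.default_rng(1)
lm = KNNLanguageModel()
lm.add(rng.normal(size=(100, 64)), rng.integers(0, 50, 100), provider="patient_a")
lm.add(rng.normal(size=(100, 64)), rng.integers(0, 50, 100), provider="patient_b")
lm.forget("patient_a")                      # revoke patient_a's data
assert "patient_a" not in lm.providers
print("prediction after erasure:", lm.predict(rng.normal(size=64)))
```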
Dr Amanda Foust
Profile: https://profiles.imperial.ac.uk/a.foust
Contact details: a.foust@imperial.ac.uk
| Project title | Description |
| End-to-end neuronal voltage time-series extraction for 4D imaging data | Neurons communicate through electrical and chemical signals. The propagation of electrical signals can be imaged by labelling neurons with fluorophores that transduce changes in membrane potential into changes in fluorescence. A fast camera mounted on a microscope is used to capture the changes in fluorescence. The images must then be segmented to extract the functional signals. Signal quality depends critically on the selection of pixels containing the most information about the membrane potential changes. Your project will be to develop an algorithm that automatically identifies which combination of pixels contains the most information about propagation of electrical signals between neurons. You will test your algorithms on 4D (3D space + 1D time) data collected in real neurons here on the 5th floor of Bessemer. The project aims can be adapted to your specific interests and the skills that you would like to acquire and refine. (A toy pixel-weighting sketch follows this table.) If you are interested in this project and have questions please attend one of two hybrid (RSM 4.05 in person + Teams) information sessions on: * Thursday June 12th from 15:30-16:00 (https://teams.microsoft.com/l/meetup-join/19%3ameeting_YjBhODk5ZTUtYzM3NS00ODdmLWE5ZTktMWM2NzMyNzJkNmQw%40thread.v2/0?context=%7b%22Tid%22%3a%222b897507-ee8c-4575-830b-4f8267c3d307%22%2c%22Oid%22%3a%229830bb4c-b5ca-4709-ba7e-4245f6595021%22%7d) OR * Friday June 20th from 10:30-11:00 (https://teams.microsoft.com/l/meetup-join/19%3ameeting_OWRhOTJhMjItM2E1ZC00Mjg3LWE5MzEtMDcyZTk0NjcyMTBi%40thread.v2/0?context=%7b%22Tid%22%3a%222b897507-ee8c-4575-830b-4f8267c3d307%22%2c%22Oid%22%3a%229830bb4c-b5ca-4709-ba7e-4245f6595021%22%7d) |
| Programming fast 4-dimensional neuronal activity analysis | Neurons communicate through electrical and chemical signals. The propagation of electrical signals can be imaged by labelling neurons with fluorophores that transduce changes in membrane potential or calcium into changes in fluorescence. A fast camera mounted on a microscope is used to capture the changes in fluorescence in four dimensions (3 spatial, 1 temporal). The ability to analyze voltage imaging movies online during a neurobiology experiment provides the experimenter with useful input for choosing the next parameters as an experiment progresses. Your project will be to write a program and GUI in Python that rapidly analyzes voltage imaging movies and displays them in an easy-to-interpret and query format. You will test your program on real data collected on the 5th floor of Bessemer, and it could also be tested by our team during live experiments in the later project phases. The project aims can be adapted to your specific interests and the skills that you would like to acquire and refine. If you are interested in this project and have questions please attend one of two hybrid (RSM 4.05 in person + Teams) information sessions on: * Thursday June 12th from 15:30-16:00 (https://teams.microsoft.com/l/meetup-join/19%3ameeting_YjBhODk5ZTUtYzM3NS00ODdmLWE5ZTktMWM2NzMyNzJkNmQw%40thread.v2/0?context=%7b%22Tid%22%3a%222b897507-ee8c-4575-830b-4f8267c3d307%22%2c%22Oid%22%3a%229830bb4c-b5ca-4709-ba7e-4245f6595021%22%7d) OR * Friday June 20th from 10:30-11:00 (https://teams.microsoft.com/l/meetup-join/19%3ameeting_OWRhOTJhMjItM2E1ZC00Mjg3LWE5MzEtMDcyZTk0NjcyMTBi%40thread.v2/0?context=%7b%22Tid%22%3a%222b897507-ee8c-4575-830b-4f8267c3d307%22%2c%22Oid%22%3a%229830bb4c-b5ca-4709-ba7e-4245f6595021%22%7d) |
| Deep Learning Denoising for Maximum Sensitivity Light-field Voltage Imaging | Imaging the membrane voltage of living cells is challenging due to the small size of the rapidly changing signals. For many imaging configurations, we consider that "shot" or Poisson noise, due to variations in photons captured on a finite-area detector, sets the limit on the smallest size of signals that can be resolved. But this may not be the whole story. In fact, while signals of interest are structured in time, shot noise causes random intensity variation over time. To exploit this difference, you will adapt and optimise DeepInterpolation, SUPPORT and other deep learning denoising algorithms to extract membrane voltage signals from neuronal light-field image series. The goal is to detect membrane voltage activity that would otherwise be lost in the noise. The project aims can be adapted to your specific interests and the skills that you would like to acquire and refine. If you are interested in this project and have questions please attend one of two hybrid (RSM 4.05 in person + Teams) information sessions on: * Thursday June 12th from 15:30-16:00 (https://teams.microsoft.com/l/meetup-join/19%3ameeting_YjBhODk5ZTUtYzM3NS00ODdmLWE5ZTktMWM2NzMyNzJkNmQw%40thread.v2/0?context=%7b%22Tid%22%3a%222b897507-ee8c-4575-830b-4f8267c3d307%22%2c%22Oid%22%3a%229830bb4c-b5ca-4709-ba7e-4245f6595021%22%7d) OR * Friday June 20th from 10:30-11:00 (https://teams.microsoft.com/l/meetup-join/19%3ameeting_OWRhOTJhMjItM2E1ZC00Mjg3LWE5MzEtMDcyZTk0NjcyMTBi%40thread.v2/0?context=%7b%22Tid%22%3a%222b897507-ee8c-4575-830b-4f8267c3d307%22%2c%22Oid%22%3a%229830bb4c-b5ca-4709-ba7e-4245f6595021%22%7d) |
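To give a flavour of the pixel-selection problem in the first project above, here is a minimal sketch that weights pixels by their correlation with a seed trace in order to extract a voltage waveform from a noisy synthetic movie. The data model and every parameter are illustrative assumptions, not the lab's actual pipeline.

```python
import numpy as np

# Data-driven pixel weighting for voltage time-series extraction.
# Illustrative assumptions: a (T, H, W) movie where a subset of pixels
# carries a shared membrane-voltage waveform, scaled by an unknown
# sensitivity map and buried in detector noise.

rng = np.random.default_rng(2)
T, H, W = 2000, 32, 32
time = np.arange(T)
waveform = np.exp(-(time % 250) / 20.0)             # periodic spike-like transients

sensitivity = np.zeros((H, W))
sensitivity[12:20, 12:20] = rng.random((8, 8))      # pixels on the neuron
movie = 100 + sensitivity * waveform[:, None, None] * 5
movie += rng.normal(scale=1.0, size=(T, H, W))      # detector noise

# Step 1: crude seed trace from the highest-variance pixels.
flat = movie.reshape(T, -1)
seed = flat[:, flat.var(axis=0).argsort()[-20:]].mean(axis=1)

# Step 2: weight each pixel by its covariance with the seed, then average.
centred = flat - flat.mean(axis=0)
weights = (centred * (seed - seed.mean())[:, None]).mean(axis=0)
weights /= np.abs(weights).sum()
trace = centred @ weights                            # extracted time series

print("correlation with ground-truth waveform:",
      round(float(np.corrcoef(trace, waveform)[0, 1]), 3))
```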
Dr Chris Rowlands
Profile: https://profiles.imperial.ac.uk/c.rowlands
Contact details: c.rowlands@imperial.ac.uk
| Project title | Description |
| A new way to simulate optical systems | Optical devices are some of the earliest precision instruments ever made, and underpin developments in many fields, not least of which is biology, where they have contributed to our understanding of cells, germ theory, neurology and pathology, to name but a few. Designing and developing new microscopy tools is therefore of widespread interest, but techniques for simulating optical propagation are either precise but computationally inefficient, or efficient but crude. Small-scale phenomena can be accurately simulated using high-precision finite element methods, which accurately account for diffraction effects, polarization, and nonlinearity, but simulating a microscope objective using these methods is infeasible. Large-scale simulation methods such as ray-tracing can handle macroscopic features, but these simplify the propagation of light considerably, meaning that the user must exercise good judgement when interpreting the results of the simulation. In many optical systems, the propagation through bulk media, such as glass or air, dominates the computational time of the simulation, and as such dramatic time savings can be made by identifying these large, homogeneous regions and not simulating their contents. The student on this project will work on approaches to segment an arbitrary space into a computationally-tractable volume, as well as approaches to combine the finite element simulation with the free-space propagation simulations. As such, a good programming background is very desirable, but enthusiasm and a problem-solving attitude are also important. |
| Developing algorithms to sculpt light in 3D | Photolithography (literally 'light stone-writing') is widely used in the semiconductor industry for patterning microchips, but using light to trigger a chemical or physical change has many uses in biology as well. 3D bioprinting, photodynamic therapy, image recording, and optogenetic control of neurons all employ light to induce a change in a biological system. One important limitation of conventional projection-based optics is that the change is induced by a single photon. This has a subtle problem in 3D applications, because if one wishes to confine the photo-response to a particular plane, the regions above and below the plane are also illuminated. This is a particular problem in optogenetics, a cutting-edge technique in which light is used to excite neurons in the brain. Here, it is desirable to excite one neuron but not the ones above or below, yet this is impossible with conventional optical projection. Fortunately, there is a potential solution - one can use high-speed projectors to make holograms that change very rapidly. None of the projected images are intense enough to trigger a neuron on their own, but the sum of many of them is. One can therefore find a series of patterns that trigger the desired cell, but cause the light above and below the desired cell to 'miss' the important regions, thus having no effect. The student for this project will work on developing an algorithm that can, for a given distribution of neurons, find a sequence of holograms that trigger a single cell but don't affect the surrounding cells. They will develop software to control a projector in order to make these patterns, and if everything goes according to plan, it may be possible to test the algorithms in a laboratory setting. The student for this project should have a moderate to strong mathematical background, and some experience in Matlab or another similar programming environment. If necessary they should have, or be able to develop the lab skills necessary to test their software in real life. |
| Analyzing hyperspectral oncological images using cutting-edge data processing | Deep learning techniques have found considerable use in pattern recognition for image analysis, but in medical imaging there are often additional data dimensions which can be exploited for improved diagnosis. This project will involve one such dataset - hyperspectral Raman images taken from tumour resection margins. In this case, the goal is to identify whether any tumour remains in the image, and if so, where it is located. Neural networks and other deep learning techniques will be used to perform this analysis, incorporating spatial and spectral information to make an accurate diagnosis. |
| Biophotonics | The Rowlands lab develops optical systems (microscopes, spectrometers, displays and so on) for use in biology. Anyone who is interested in designing new systems, building instrumentation, simulation or performing image analysis is welcome. |
| Towards a Raman-Activated Cell Sorting system for cancer screening | Nobody needs to be told how much of a threat cancer poses to the population; even worse, certain types of cancer (such as pancreatic cancer, or certain types of ovarian cancer) are so difficult to detect that once they are observable, the prognosis is very poor. A screening method that can detect the limited number of cancer cells circulating in the blood would be of interest in these cases. Fluorescence Activated Cell Sorting, or FACS, is a routinely-used method for sorting cells into different categories based on fluorescence. Unfortunately, cancer cells aren't fluorescent, and finding a good label is arduous and often ineffective. The alternative is to use some form of intrinsic contrast, such as the Raman effect. The Raman effect allows molecules to be identified by the characteristic vibrational frequencies of the bonds in the molecule itself, thus it is very specific and requires no labelling or staining. The goal of this project is to take the first steps towards a combined Raman-Activated Cell Sorting (RACS) and single-cell sequencing instrument that can identify rare circulating tumour cells early. The student on this project will first be responsible for designing, building and programming a Raman microspectrometer, and then using it to analyse different cell populations (some made up of known cancer cells, some not) to see whether the system can distinguish an individual cancer cell from the thousands of other cells also found in the blood. The ideal student will have a background in programming, some CAD skills, and experience building instrumentation, but these are by no means a requirement; the student will be taught anything necessary that they do not already know. |
| Drugs on Demand - towards an automated synthesis platform | Modern drug synthesis occurs in large chemical plants, or at the very least on a lab bench, and requires extremely well-trained researchers, lots of glassware or plant components, and great expense. This project tries to do away with all of those limitations, allowing essentially any synthesis to be performed on a reconfigurable microfluidic chip. Microfluidics has great promise, particularly for small-scale syntheses, in that it can perform reactions more rapidly, under more tightly controlled and uniform conditions, and in an entirely automated manner. Unfortunately, chip designs for one reaction cannot be easily modified or used for another reaction, which limits flexibility. This new microfluidic chip will be able to emulate any other design, changing reaction conditions and configuration rapidly and easily, ushering in a new era of microfluidic drug synthesis. The student on this project will be working with a postdoc to develop the new microfluidic chip. It uses an array of tiny wax motor valves, so first the student will be responsible for designing and characterizing these valves, before scaling up to larger arrays. The ideal student will have some experience in CAD modelling, design of simple electrical circuits, and basic programming, but these are by no means essential - all candidates will be considered, and any required skills can be taught. |
| Detecting bioweapons with stand-off Raman spectroscopy | Bacillus anthracis, commonly known as anthrax, is a potent bioweapon. Since its first use in World War Two, there have been a number of attacks and close calls, ranging from a 1979 accidental release of spores in the former Soviet Union which killed 69 people, to several attempted terrorist attacks by the Aum Shinrikyo cult in Japan in the 1990s, and the 2001 anthrax letter attacks on senators in the United States. Anthrax is a powerful bioweapon not only due to its pathogenicity, but because it can form spores which are extremely difficult to eradicate. These spores are stable for decades, and are resistant to radiation, ultraviolet light, desiccation, extreme heat and cold, as well as a number of chemical disinfectants. Identification and detection of these spores is critical to decontamination of an area after a suspected attack, but for obvious reasons, it is not a good idea for a user to get too close to a suspected contamination. Finding a way to detect these spores at ranges of 10m and above would be extremely beneficial for first-responders who wouldn't have to risk their lives to test a suspected release site. One way to perform this detection is using Raman spectroscopy. The student on this project will be responsible for building a system to perform Raman detection at long distances, without compromising on sensitivity. This system will be able to detect Bacillus subtilis (a benign analog of anthrax) without the need for the user to come near the sample location, and the project will involve some optical engineering, programming, and potentially some electrical engineering. |
| Speedy Spectroscopy - investigating new ways to speed up vibrational spectroscopy | Raman spectroscopy is an analytical technique which provides a wealth of information about a sample, allowing identification of molecules and even diagnosis of diseases (especially cancer). It requires no labelling of the sample, is extremely specific, and applicable to almost any compound imaginable. Given these virtues, it is fair to ask why it is not more ubiquitous in medical diagnosis, and the answer is that it is painfully slow. Spontaneous Raman microscopy takes around a second to collect even a low-quality spectrum, and this is simply too slow as a tool for mapping tissue, or screening cells. Finding a way to speed the process up would be ideal. In this project you will be exploring techniques to speed up Raman microscopy, for example by using parallel excitation, light-sheet imaging, electron-multiplying CCDs, high-power lasers or high-performance signal-processing methods. Some useful skills might include programming hardware devices / signal processing algorithms, optical alignment or precision machining, but these are not required, and the requisite skills can be taught. |
| World's Fastest Video Camera | High-speed imaging requires specialized cameras to capture fleeting events like explosions, hypersonic flow, or even the passage of light. In this project, we are interested in the oscillations of an ultrasound bubble, which occur at frequencies of a few megahertz. As such, we will need to build a camera that can image at around one hundred million frames per second, for a duration of around one second. These requirements are far beyond even the fastest cameras available today, necessitating a new development program. The student on this project will be building part of the camera, specifically a small piece of the sensor. Using newly-available silicon photomultiplier arrays, we will be constructing a small-scale prototype with the sensitivity and speed necessary to capture data at these incredible speeds. The ideal candidate will have a good background in electrical engineering, and will be designing and testing readout circuitry for the camera. Once this is complete, they will begin testing a small-scale prototype by building the large-scale optical system required to magnify the bubbles enough to be seen by the sensor. This project will also involve a certain amount of programming, in order to reconstruct the data after the experiment is complete. |
| A New Head Mounted Display Concept: Virtual Reality in a Pair of Sunglasses | In order to experience immersive virtual reality, a display must have a large field of view and a high resolution, otherwise the user will feel like they are 'looking at the world through a toilet roll'. Commercially available head-mounted displays like the Oculus Rift, HTC Vive and PlayStation VR solve this problem by placing the screen in front of the eyes, but this is clearly an inelegant solution as it involves basically strapping a brick to your face. More recent designs such as the Microsoft HoloLens and Magic Leap One use holographic gratings to project light into the eye, but these have a smaller field of view, leading to the 'toilet roll' problem described above. The Rowlands lab is currently developing a new type of holographic display, which can achieve a large field of view along with high resolution, by making the hologram itself active, rather than passive. Instead of projecting the whole image at once, the display scans a beam across the eye at high speeds, producing the illusion of high resolution but without the compromises needed for the HoloLens or Magic Leap One. The student on this project will conduct theoretical and experimental studies into the feasibility of this design. They will be using finite-difference time-domain modelling and fabricating electro-optically active waveguides in an attempt to demonstrate a proof of principle, with the goal of producing a device that can project simple patterns into a stationary eye. The ideal student will have a good background in computer modelling, an interest in microfabrication and photolithography, and possibly some electrical engineering expertise. Any necessary skills can be taught however. |
| Building a next-generation scanning microscope | Scanning optical microscopy is a workhorse tool for modern biology - it can see deeper into tissue, with 3D resolution, and observe fast dynamic events. Recently, Drs Rowlands and Pantazis have been interested in developing a technology called Primed Conversion (https://www.nature.com/articles/nmeth.3405) in order to make it easier to use for researchers around the world. Primed Conversion involves optically tagging cells as they develop, allowing us to trace the development of an organism from a single cell all the way up to a complete animal and seeing which cells are destined to form which parts. The missing piece for the widespread use of Primed Conversion is the integration of the system into microscope systems. The student on this project will build an add-on to a microscope which can perform Primed Conversion, aligning two lasers and scanning them in parallel through the sample. The skills required involve programming, electronic engineering, some mechanical design and some optical engineering, but any skills that the student doesn't possess can be taught. The most important thing is an aptitude for learning quickly and hard work. |
| AstroTIRF: Pinning light to a surface | Total Internal Reflection Fluorescence Microscopy is an imaging technique that can take pictures of cells with incredible resolution - it is able to see things that are the thickness of a virus. While this is very important for imaging of complex cell processes, the limitation is that we can only see the surface of the cell - we can't see inside, as we can with a normal microscope. Nevertheless, it might be possible to interfere two illumination patterns together and combine the high resolution of TIRF with the ability to see features hidden inside the cell. The student on this project will be responsible for delivering on this vision. The student will start this project by modelling the system using optical wave propagation software, before moving on to optics experiments in the lab. Initially work will be on a test system, but eventually will be incorporated into a microscope and used to image cells. The ideal student for this project would have a good background in programming, and some experience with building precise mechanical devices, but the student could be taught anything they need to know. |
| Next-Generation Drug Synthesis: Optimizing bioreactors with lasers | A great many modern drugs are manufactured, not in chemical reactors, but in bioreactors: steel or glass vessels housing many litres of cell culture medium and a colony of genetically-modified cells which produce the drug itself. As this mass-manufacturing technology underpins the production of pharmaceuticals worldwide, there is considerable interest in achieving even modest gains in efficiency and yield which, when scaled out over a large-scale manufacturing process, contribute to dramatic cost-savings. Unfortunately, if optimising a chemical reactor is hard (with all the inhomogeneities in temperature, pressure, reagent concentration and so on), optimising a bioreactor is much harder still, because cells are much more sensitive to their local environment. Fortunately, researchers in the Polizzi lab in Chem Eng, and the Rowlands Lab in Bioeng are working on a way to monitor these cells in situ, using optical imaging and fluorescent reporter cells. The student will work on a system to image the fluorescence from a variety of locations within a large (litre-scale) volume using a large number of optical fibers coupled to a microscope. The student will use the system to monitor reactions in the reactor, and try to reconstruct the resulting fluorescence distribution. The student will need some basic precision manufacturing skills and an ability to prototype ideas quickly, but most important is a willingness and ability to learn quickly. |
| Advanced Microscopy for Everyone | One of the workhorse instruments in a microscopy suite is the confocal microscope. Unlike a normal microscope, it can image objects in three dimensions, which helps explain why modern laboratories use them so extensively, in fields as diverse as histopathology, neuroscience and cell biology. Nevertheless, confocal microscopes are very expensive, costing hundreds of thousands of pounds in many cases, despite containing no particularly expensive parts. This enormous price puts the instrument out of reach of researchers in the developing world, and even several laboratories in developed countries as well. This project will seek to redress this balance, by developing a confocal microscope using modern low-cost rapid prototyping facilities, off-the-shelf microcontrollers and careful design, broadening access to this core technology throughout the world. The student on this project will be responsible for building this instrument, based on a modern design known as a 'rescanned confocal'. This will require some work with a CAD package (like Solidworks), some 3D printing or CNC machining (possibly outsourced) and a bit of programming experience. Students should not be put off taking this project if they don't feel they possess these skills though, as they can be taught. Motivation and a willingness to learn is much more important. |
| Seeing the world in hundreds of colours: SERS tags for biology | Fluorescence microscopy is performed by countless labs around the world, labelling their molecules, proteins, membranes or organelles with a bright, fluorescent label which can be seen under a microscope. Unfortunately, there are a limited number of fluorophores that can be seen in the same image - separating them by their colour gets progressively harder because they all emit over a broad range of wavelengths which are difficult to separate. The same is not true for Raman spectra; these contain very sharp spectral features and can be easily identified from their spectral patterns, but the Raman effect is very weak - taking Raman maps of a surface is very laborious. One solution to this problem is to use the Surface Enhanced Raman Scattering (SERS) effect. SERS occurs when an analyte interacts with a gold nanoparticle, which enhances the electric field substantially. Since the Raman effect scales as the fourth power of the electric field, a modest 100x field enhancement results in a 10^8-fold increase in the Raman signal, making it a bright and efficient molecular tag. This project will be to investigate the use of SERS particles as tags, from modelling the electromagnetic properties of these SERS particles, to using them to image tens of different features in a small cell. This project is quite open-ended, and would therefore suit a range of students, from those interested in computational modelling to people interested in microscopy, instrument development, or even wet chemistry. |
| Watching Sound - creating a new technique for stand-off ultrasound imaging | Ultrasound is one of the safest, cheapest and most powerful ways to image deep within the body. Compared to MRI it is fast, easy to use and significantly less onerous on the patient. Nevertheless, there are limitations which we are working to overcome. All current forms of ultrasound imaging require the user to place an ultrasound probe in contact with the skin. This in turn requires a skilled ultrasound technician to apply ultrasound gel and move the probe to image the organ of interest. A more elegant solution would be to use optical imaging to see the acoustic signal (as well as exciting it), thus removing the need for the technician, gel or even for the patient to lie on a bed. The acoustic signal could be simply recorded by imaging the patient's body with a very fast camera. The Rowlands lab is working on developing optical ultrasound detectors based on evanescent wave sensors; these are extremely sensitive to minute changes in the position of an array of nanoparticles, and thus to a passing acoustic wave. The student working on this project will help develop this new type of ultrasound detector, building the nanoparticle suspension, excitation optics and imaging / readout. The ideal student would have a background in the physical sciences or engineering, with a willingness to try new things and learn. The Rowlands lab is highly multidisciplinary, with lots of different researchers studying lots of different things, so new perspectives and approaches are encouraged. The student can be taught most (if not all) of the skills and techniques they will need to know. |
| Dynamic Dichroic Mirrors - making reprogrammable optical filters for stand-off chemical imaging | Hyperspectral imaging is used in applications from chemical weapon detection to cancer diagnosis, from fraud monitoring to industrial quality control. Currently wide-field camera-based hyperspectral imaging systems are based around single filters - you must know exactly what you're looking for in order to select the right filter. The Rowlands Lab is working on a new type of optical filter which can be reprogrammed at will, allowing, for example, arbitrary chemicals to be searched for. Currently, the optical components have been assembled, but need to be tested and new materials tried out. The student on this project will be responsible for taking this system from prototype stage to working tool, and will have to develop a number of skills, from instrument development and debugging, to materials development and optimization and finally development of robust testing methods. There is also the potential for publication or intellectual property development, should groundbreaking advances be made. The ideal student on this project would have a willingness to learn, adaptability and some background in the physical sciences, engineering or computer science. That said, talented students from any background will be considered, and the relevant knowledge taught. |
| Making a true 3D camera | When it comes to microscopes, there is no shortage of approaches to imaging a 3D sample: multiphoton microscopy, light-sheet microscopy, confocal and so on. What is notable about these techniques, however, is that they work by imaging a volume one plane at a time, and thus aren't really imaging in 'true' 3D. This project will change all that, as the student will be working on a system that can really image a volume (animal heart, brain, cancer organoid, tissue sample etc.) in 3D. The system itself is based on a design called a Framing Camera. This uses a mirror to reflect light to a number of cameras, each of which can see a different plane in the sample. The student in this project will be constructing the prototype of this system, which will involve assembling the cameras and the optical system, programming the mirrors, and ultimately building the world's first true 3D microscope. The ideal student for this project will have a good background in mechanical, electronic or software engineering, and a keen interest in picking up new skills. He or she will be ambitious and self-motivated, and a quick learner. There is no specific requirement on skills as these can all be taught. |
| Intelligent Imaging - tagging and tracking cells in 3D | In collaboration with the Pantazis lab, we have created a new type of microscope that can selectively switch a cell expressing a fluorescent protein from green to red. This is useful for a number of cell-tracking tasks, particularly lineage tracing where the cell of interest needs to be tracked along with all its daughter cells. Now we want to take the next step - programming the microscope to track these cells in real time, and "top up" the colour change where necessary. The student on this project will program the microscope to rapidly scan the sample, mapping out its structure and reconstructing the images into a 3D volumetric dataset. The program will then identify regions in which the photoconversion is lacking, return to those locations and photoconvert them specifically. The ideal student would have a decent programming background, an interest in hardware construction / automation and a willingness to learn new skills. That said, the most important thing for this project is self-motivation - the rest can be taught where necessary. |
| Investigating mice with brain cancer by Raman mapping | Raman microscopy is a powerful technique for mapping the distribution of different molecules. Subtle molecular changes can be recorded and used to assess the disease state of a tissue. The Rowlands lab pioneers high-throughput Raman imaging technologies, and in this project the student will be using some of them to investigate the metabolic activity of a mouse brain tumour provided by the Syed lab (Brain Sciences). The student on this project will be responsible for collecting samples, taking both point-by-point Raman measurements as well as light-sheet Raman measurements, and comparing the two. If needed, the student will make modifications to the Raman instrument in order to improve performance. The student on this project should be willing to learn, able to obtain biological samples competently and reliably, possess good attention to detail, and ideally have some programming and / or mechanical engineering expertise. |
| Virtually Microscopic - building a virtual-reality interface to complex microscopic data | The design of a microscope has remained the same for 350 years: the user looks down an eyepiece, moves a stage and focuses the lens to see features of interest in a sample. Nevertheless, the recent availability of low-cost virtual reality systems means that users need no longer be tethered to the instrument; researchers, doctors and students alike can explore the rich datasets that are gathered by modern microscopy, or even guide the microscope in real time, gaining a new perspective which hopefully leads to new insight. As a researcher on this project, you will have good programming skills and some familiarity with complex Software Development Kits (SDKs). You will be programming a head-mounted display to project part of a large microscopic dataset, updating the display as the user moves around the environment. As the project progresses, you will be incorporating control over the microscope as well, rapidly capturing data to allow the user to explore a sample with as much freedom as possible. |
| 3D Print ALL the Things? | Optics laboratories investigate a diverse range of phenomena, such as brain activity, tumour growth, embryonic development and so on; however, the fundamental components used in optical setups are similar across labs and experiments. This compatibility allows components to be commercially sourced and easily integrated into a range of set-ups; however, these parts are generally expensive and involve international supply chains, which can create a barrier to entry for many researchers around the world. In this project the student will design a family of optical components that they will manufacture via 3D printing and test in the lab. The end goal is to create an online database of CAD models and assembly instructions which can be freely shared with researchers around the world, creating cheap and fast access to research equipment. The student on this project will be responsible for designing and manufacturing a range of opto-mechanical components. This will involve work in a CAD package to design components, conversion into a printable format, followed by printing and testing the components. Some experience with either CAD or 3D printing would be advantageous. |
| Making the ultimate colour camera - volumetric holograms for hyperspectral imaging | Most cameras view the world with three colours, but the world is really a symphony of wavelengths from the ultraviolet to the infrared. Importantly, these wavelengths help us gain knowledge about the world around us - identifying hazards, mapping disease, informing science and so on. The student on this project will work on the world's first true hyperspectral camera - a camera that can see, not in three colours, but a whole spectrum. This will be created by building a volume hologram which splits each pixel into a large number of sub-pixels, each of which is sensitive to a different wavelength. To start, the student will simulate this volume hologram, making sure that it splits light into an array of different colours with high efficiency and minimal crosstalk. Once the ideal volumetric pattern is determined, the student will go on to manufacture the device, and ultimately test it in the laboratory. The ideal candidate for this project will have a background in basic programming as well as a strong problem-solving mindset, but as with all student projects in the Rowlands lab, the necessary skills can be taught. |
| Freezing sound - pioneering high resolution ultrasound through optical tomography | Ultrasound is one of the most widely used medical imaging techniques, yet unlike the camera you're carrying in your pocket, the detector has only a few hundred pixels, severely limiting resolution and field of view. Fortunately, optical cameras can record an ultrasound field with megapixel resolution, substantially boosting medical imaging performance. The student on this project will take the first steps towards tomographic imaging of an ultrasound wave, by observing the sound wave as it passes through a person, changing the refractive index as it compresses and stretches the material it is passing through. This project will start by simulating the ultrasound wave and estimating how sensitive the camera needs to be (a back-of-the-envelope sketch of this estimate follows this table). The student will then set up an experiment in the lab to see whether measured performance matches simulation. The first reconstructions will be done in 2D, but ultimately the system will be extended to 3D to fully capture any ultrasound wave. Students will ideally have a good background in programming and good lab skills; the optics and acoustics knowledge will be taught on the job. |
| SIMaging the future | Optical microscopy is widely used in biomedicine as it is easy to use, safe for almost all samples, fast, and versatile. Unfortunately it has a physical limitation called the diffraction limit, which prevents it from observing features smaller than ~100 nm. Structured Illumination Microscopy (SIM) is a technique for increasing the resolution of an optical microscope. It works by taking nine images of a sample with different interference patterns applied to them, then reconstructing these images, using the Moiré effect to work out what the underlying sample distribution looks like. The Rowlands lab has pioneered one of the fastest SIM systems in the world, and is in the process of developing version 2, which should be practical and commercialisable. The student on this project will be responsible for developing an improved version of the existing instrument that is more robust, compact and easier to use than the original, ultimately so that the instrument can be sold commercially. They should be comfortable handling expensive hardware with care, and ideally have a reasonable programming background (although all necessary skills on this project can be taught). The more important attributes are a willingness to learn and self-motivation. |
| Turning tissue totally transparent | Within the last year there has been a breakthrough in our ability to control the transparency of live tissues. Typically, light scattering is the main reason why we can't see through things, and scattering is caused by microscale inhomogeneities in refractive index. By exploiting a mathematical relationship called the Kramers-Kronig relations, we can now tune the refractive index of an object until it matches that of the rest of the local environment, making it essentially invisible. What the vast majority of people have failed to notice is that rather than just flooding a tissue with another medium to change its bulk refractive index, photopatterning can change the refractive index locally, allowing us to *remove* the refractive index differences, rather than just minimising them. The student will use the multiphoton microscope in the Rowlands lab to selectively photobleach a dye that has been infused into a tissue, in an attempt to demonstrate that the refractive index can be changed to whatever value the user desires. This will involve wet-lab work, potentially some programming and hardware design, and potentially even work with live tissues if the project is that successful. All of these skills can be taught, so the main requirement is a willingness to learn. |
| Edible holography | In the news recently it was announced that a common food dye, tartrazine, could be used to change the refractive index of an object. This is important because being able to structure refractive index on the microscale makes it possible to create true volume holograms, which can be observed from all directions and which are visually indistinguishable from the real object. The student on this project will be using tartrazine infused into agar gels to create these volume holograms. By photobleaching the tartrazine in a carefully defined pattern, the refractive index can be changed to whatever the user desires, allowing these holograms to be built up one pixel at a time. And because the gels are made out of agar and tartrazine, they are (technically) edible. More seriously, these holograms can act as sensors, probes, vision correctors and so on. On a day-to-day basis the student will be creating gels, testing the photobleaching, designing holograms, and ultimately creating arbitrary three-dimensional holographic objects. This will combine optics, programming, wet-lab work, and analysis, but as all students in the Rowlands lab are encouraged to make the project their own, the exact balance of these skills is flexible, and all can be taught. |
| Real laboratory automation | Laboratory automation usually refers to the use of robots to perform experiments and other repetitive tasks without user interaction, but another meaning could be to literally automate a laboratory: to control the lighting, shutters, fans and other noise sources automatically in response to voice commands, interlock conditions, number of users, lab temperature or any other condition. Until recently this would have been an extremely expensive endeavour, but the proliferation of low-cost home automation equipment and open-source projects such as Home Assistant means that this can be explored at comparatively low cost. The goal of this project is therefore to make work in the laboratory safer, easier, more repeatable, and with better logging of conditions than has previously been possible. This will involve significant amounts of programming, a modest amount of hardware design, and possibly some interior decoration skills. Fortunately all of these can be taught, although the interior decoration may be somewhat subjective. More seriously, many accidents happen because users are not paying attention to changing conditions or different environments in the lab. Computers do not suffer from this lack of awareness, and by logging everything that happens in the lab and making sure that conditions are as safe as possible, experimental outcomes can only benefit. |
| FLYdom of movement | The behaviour of many organisms is strongly affected by their movement, which often gets ignored when designing experiments; animals are often fixed in place, suspended on treadmills, balls, or floating platforms, so they can be imaged while "moving" using big, bulky microscopes. This is especially true for flies, which lack the strength or size to carry even a simple imaging system. Nevertheless, these configurations are rarely satisfying as an experimental paradigm; it would be better to move the microscope. This is where we step in. The Rowlands lab is working on a high-performance, lightweight robotic fly-tracking microscope that can move to keep a fly within the field of view as it walks around. The student will be refining this instrument, improving the tracking of flies and ensuring that detailed images can still be taken regardless of how fast the fly moves. Ultimately, the instrument will be used to record neural activity in awake, behaving flies as they interact with their environment. The student who takes this project will have a strong grasp of programming and electronics, and the determination to succeed while pushing the limits of what is possible. |
| Streaming Continuous Optical Nanosecond Events (SCONE) | When high-intensity focussed ultrasound is directed at the brain, it can cause microbubbles injected into the bloodstream to break through the blood-brain barrier, allowing drugs and other treatments to reach an organ that is usually carefully protected. This obviously has very clear medical utility, but the problem is that we have no idea how the microbubble does this, and thus can't optimise the process. SCONE is a project to record an oscillating microbubble at roughly 20 million frames per second, so that when it does break through the blood-brain barrier, we can see what it is doing. SCONE requires computational reconstruction to recover the microbubble data; the student will therefore be employing advanced data recovery and modelling algorithms to address this challenge. A strong background in mathematics and / or programming would therefore be advised. |
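For the "Freezing sound" project above, the feasibility question is how large an optical phase shift a passing ultrasound wave imprints on a probe beam. Below is a back-of-the-envelope sketch in Python; the piezo-optic coefficient, pressure, path length and wavelength are illustrative assumptions, not project specifications.

```python
import numpy as np

# Illustrative values only: dn_dp is an assumed piezo-optic coefficient for
# water (~1.5e-10 per Pa); the other numbers are placeholders.
dn_dp = 1.5e-10       # refractive index change per Pa (assumed)
pressure = 1e6        # peak acoustic pressure, Pa
path = 5e-3           # optical path length through the beam, m
wavelength = 532e-9   # probe wavelength, m

delta_n = dn_dp * pressure                        # index modulation ~1.5e-4
phase = 2 * np.pi * delta_n * path / wavelength   # accumulated optical phase
print(f"optical phase shift ~ {phase:.1f} rad")   # ~8.9 rad for these values
```

A shift of several radians over a few millimetres of path suggests the effect should be detectable interferometrically, which is exactly what the lab experiment would test against simulation.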
Profile: https://profiles.imperial.ac.uk/g.yang
Contact details: g.yang@imperial.ac.uk
| Project title | Description |
| Quantitative Analysis of Cell Populations | Understanding the distribution and proportion of different cell types in a sample is crucial for many biological and medical studies, such as cancer research and immunology. In this project, the student will implement image processing techniques to segment cells in immunofluorescence images, identify cell types based on specific fluorescence markers, and then quantify the cell populations and perform statistical analysis (a minimal segment-and-count sketch follows this table). The target is to accurately quantify and compare different cell populations under various conditions to draw meaningful biological conclusions. |
| Machine Learning for Predictive Biomarker Discovery | Identifying predictive biomarkers from images can aid in early diagnosis and personalized treatment strategies. In this project, the student will need to extract relevant features from immunofluorescence images, train convolutional neural network (CNN)-based predictive models using clinical outcome data, and validate the biomarkers. The target of this project is to discover and validate predictive biomarkers that can be used to forecast disease progression and treatment response. |
| Synthesizing Immunofluorescence Images from Bright Field Images | Immunofluorescence imaging is valuable for visualizing cellular structures but is costly and resource-intensive. Bright field imaging is more accessible, so creating immunofluorescence images from bright field images can make detailed cellular analysis more widely available. The student will need to use a generative adversarial network (GAN) trained on paired bright field and immunofluorescence images to generate synthetic fluorescence images from bright field inputs. The target is to develop a tool that accurately produces immunofluorescence images from bright field images, providing detailed cellular insights without the need for expensive staining techniques. |
| Generative Harmonization of Cell Painting Images Across Protocols | Different imaging protocols can produce varying image data even for the same type of cells. To address this, the student will develop a generative adversarial-based or diffusion-based model trained on paired images obtained from different cell painting protocols. The goal is to standardize these images, making data from diverse protocols consistent and comparable. This tool will harmonize cell painting images, enhancing the reliability and accuracy of cellular analysis without the need for uniform imaging conditions. |
| Enhancing Cell Image Segmentation through Data Harmonization | Cell image segmentation is crucial for accurate cellular analysis, but variations in imaging protocols can affect segmentation quality. The student will develop a harmonization algorithm that standardizes cell images from different protocols before segmentation. By training a machine learning model on harmonized datasets, the tool will improve segmentation accuracy across various imaging conditions, making it easier to analyze and compare cellular structures. |
| Self-supervised learning for cell imaging foundation model | Cell imaging generates vast amounts of data, but labeled datasets are often limited and costly to produce. Self-supervised learning (SSL) offers a way to leverage large amounts of unlabeled data for pretraining models, which can then be fine-tuned with limited labeled data for specific tasks. This approach can significantly improve the performance of models in various cell imaging applications, such as segmentation, classification, and feature extraction. |
| A Novel Framework for Simulation of Training Data for Super-Resolution in Brain MRI: Enhancing Paired LR-HR Dataset Availability | |
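For the cell-population quantification project above, the following is a minimal sketch of a segment-then-count pipeline using scikit-image; the file name, channel layout and thresholding choices are illustrative assumptions rather than a prescribed method.

```python
import numpy as np
from skimage import io, filters, measure, morphology

# Hypothetical two-channel immunofluorescence image: channel 0 = nuclei (DAPI),
# channel 1 = a cell-type marker. File name and channel layout are assumptions.
img = io.imread("example_field.tif")
nuclei, marker = img[..., 0], img[..., 1]

# Segment nuclei with Otsu thresholding plus small-object cleanup.
mask = nuclei > filters.threshold_otsu(nuclei)
mask = morphology.remove_small_objects(mask, min_size=50)
labels = measure.label(mask)

# Call a cell "marker-positive" if its mean marker intensity passes a cutoff.
props = measure.regionprops(labels, intensity_image=marker)
cutoff = filters.threshold_otsu(marker)
positive = sum(p.intensity_mean > cutoff for p in props)
print(f"{positive}/{labels.max()} cells marker-positive")
```

In practice the Otsu thresholds would be replaced with values tuned to the staining protocol (or with a learned segmentation model), and the per-condition counts would feed into a statistical comparison across conditions.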
Contact details: k.jayaram@imperial.ac.uk
| Project title | Description |
| Distributed Tactile Sensing | The project involves creating cockroach-inspired antennae with distributed 1D and 2D mechanosensors (inspired by insect campaniform sensilla). We will investigate the role of active antennal movements in enhancing sensing and tactile discrimination. These antennae will be integrated on insect-scale robots to demonstrate high-speed tactile SLAM-based navigation (in the dark). Will involve collaborations with the Dyson School of Design Engineering. Expect strong interest in microfabrication, nano-3D printing and laser processing. Experience with clean room procedures and microcontroller programming is an advantage. |
| Modeling of Insect-scale Shape Morphing Robots | The project involves modeling the kinematics and dynamics of insect-scale bioinspired shape-morphing robots in Isaac Gym / MuJoCo to create digital twins (a minimal simulation-stepping sketch follows this table). These models will be used for training machine learning algorithms and for developing bioinspired AI controllers for navigating complex terrains. Will involve collaborations with the Computer Science Department at Imperial and at ETH Zurich. Prior experience with physics-based modeling software is a must. |
| Digital twins of spiders | The project involves modeling the kinematics and dynamics of arthropods in Unity / MuJoCo to create digital twins, using high-fidelity tracking data (DeepLabCut, Replicant) collected from spiders moving on a treadmill at varying inclinations (vertical, lateral and upside down). These models will inform the creation of new bioinspired gaits to be deployed on insect-scale robots. Will involve collaboration with other insect labs in the department. Expect a strong background in data processing, programming and AI/ML. Prior experience with computer graphics and modeling is an advantage. |
| Firefly inspired Optical communication for Swarming Drones | The project involves creating a nanoquadrotor (less than 60 mm) capable of emulating firefly-like communication (flashing and response). The project will involve mechanical design, electronic fabrication of custom controller boards and software development for firefly-inspired communication strategies. We expect to field test this system by the end of the project to demonstrate active closed-loop communication with fireflies as the first step towards understanding complex signalling. Will involve collaboration with international teams. Looking for a prior background in drone design, control and programming. Experience with building custom electronics is an advantage. |
| Plant-sensing insect robots | The project involves creating custom sensors, manipulators and attachment mechanisms for insect-scale robots to sample and deliver biomolecules to plant tissues. Interest in plant science/ environmental monitoring is an advantage. Strong background in flexible electronics design and integration along with software development is preferred. Involves collaboration with other departmental and international teams. |
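For the digital-twin projects above, the core simulation workflow is loading a model, driving its actuators and stepping the physics. Below is a minimal sketch using MuJoCo's Python bindings; the single-hinge XML model and the sinusoidal command are placeholder assumptions, not a description of the lab's robots.

```python
import mujoco
import numpy as np

# Placeholder model: one actuated hinge joint. A real digital twin would load
# a calibrated robot or arthropod description instead of this toy XML.
XML = """
<mujoco>
  <worldbody>
    <body>
      <joint name="hinge" type="hinge"/>
      <geom type="capsule" size="0.01" fromto="0 0 0 0.1 0 0"/>
    </body>
  </worldbody>
  <actuator><motor joint="hinge"/></actuator>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

# Drive the joint with a sinusoidal command and record its angle over time.
trajectory = []
for t in range(1000):
    data.ctrl[0] = 0.1 * np.sin(2 * np.pi * t * model.opt.timestep)
    mujoco.mj_step(model, data)
    trajectory.append(data.qpos[0])
```

In a digital-twin setting the simulated `data.qpos` trajectories would be compared against motion-capture data (e.g. from DeepLabCut) to calibrate the model before training controllers on it.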
Profile: https://profiles.imperial.ac.uk/p.pantazis
Contact details: p.pantazis@imperial.ac.uk
| Project title | Description |
| The very first oncogenic hit: watching a single-cell mutation hijack a normal intestinal crypt in real time to initiate colorectal cancer | Colorectal tumours often begin when a single stem cell in the intestinal lining picks up a driver mutation—typically knocking out a gene like Apc, a key gatekeeper in the Wnt signalling pathway. But what happens next has never been directly observed. Does that one cell grow faster than its neighbours? Or does it reprogram the local environment to suppress competition? These are not just questions of cancer biology—they’re questions about how cells compete, cooperate, and take over structured tissues. This project gives you the opportunity to answer those questions using a cutting-edge genome editing system with unprecedented control. You’ll use mouse intestinal organoids—synthetic, miniaturised tissues grown in 3D culture—as your model of the crypt, the basic unit of intestinal self-renewal. These structures are ideal for studying stem-cell behaviour in a setting that closely mimics the native tissue. At the core of the platform is a precision-engineered CRISPR system designed for subcellular, real-time control. It remains completely inactive until illuminated by two intersecting laser beams—a method called Primed Conversion, originally developed in the Pantazis lab. Only where the two light paths meet is genome editing activated. That means you can mutate exactly one stem cell inside a live organoid at exactly the moment you choose—before a division, during stress, or mid-way through a regenerative cycle. All other cells remain untouched. This kind of temporal and spatial precision has never been possible in genome editing before. Existing systems affect whole tissues or entire cell populations, masking the earliest dynamics of cell competition. Here, by targeting a single cell, you can dissect the very first steps of tumour initiation—from the first mutation to potential clonal dominance—in real time, inside a living structure. Practical student experience Over nine months you will: • design and build CRISPR constructs using Gibson cloning, • optimise DNA delivery into 3D organoids using high-efficiency Neon electroporation, • perform single-cell editing using subcellular photo-activation on a Leica Stellaris 8 confocal and a custom-built light-sheet microscope, • acquire and analyse long-term 3D movies of clonal dynamics, • use Python to extract lineage and competition data from image stacks. This project gives you hands-on exposure to synthetic tissue models, programmable gene control, advanced microscopy, and quantitative bioanalysis—a complete pipeline from molecular design to live-tissue dynamics. |
| The birth of an organiser: watching a single engineered cell ignite a Wnt gradient to break symmetry | Embryonic development often begins when a tiny cluster of cells acts as an organiser, secreting morphogens like Wnt3a to polarise an otherwise uniform tissue. But what happens when a single cell takes on this role? Can one cell alone create a true morphogen field and induce organised patterning in its neighbours? This fundamental question — whether one cell is sufficient to break symmetry — has never been directly answered. Doing so requires precise control over both the birth of the organiser and the ability to track its influence over time. This project gives you the opportunity to dissect these events using a cutting-edge synthetic biology platform that tightly couples optogenetic control, cell–cell contact logging, and endogenous fate induction. You’ll use mouse embryonic stem cell aggregates—3D synthetic tissues that mimic the early embryo—as your model system for symmetry breaking. These structures are ideal because they retain responsiveness to morphogen gradients but start from a naive, isotropic state. At the core of the platform is a light-gated genetic switch combined with SynNotch synthetic receptors. A dual beam illumination technique called Primed Conversion triggers one cell to simultaneously begin secreting wild-type Wnt3a and display a membrane-bound GFP ligand. Direct neighbours are permanently logged by SynNotch activation (via mCherry expression), while Wnt target gene expression (T/Brachyury, Axin2) is detected by immunostaining. This separation between physical contact history and Wnt gradient response allows you to map, cell-by-cell, how far secreted Wnt propagates beyond immediate neighbours—and whether true patterning emerges. This kind of temporal and spatial precision—over both the organiser’s birth and its domain of influence—has never been achieved before. Existing methods activate whole populations or introduce global ligands, masking the early emergence of gradients. Here, by controlling a single cell, you can dissect the very first steps of organiser formation: the birth of asymmetry itself, inside a living structure. Practical student experience Over nine months you will: • design and assemble the light-gated organiser constructs using Gibson cloning, • generate stable mouse ESC lines using PiggyBac transposition, • induce single-cell organiser activation using subcellular photo-activation on a Leica Stellaris 8 confocal or custom-built light-sheet microscope, • fix and immunostain aggregates for key readouts (mCherry for contact history, HA for Wnt secretion, T/Brachyury and Axin2 for Wnt response), • acquire and analyse 3D confocal image stacks to extract spatial patterns of fate induction, • use Python to segment cells, map distance-dependent signalling, and quantify gradient spread. This project gives you hands-on exposure to synthetic developmental biology, programmable cell signalling, live-tissue optogenetics, advanced microscopy, and spatial data analysis—a complete pipeline from molecular design to real-time morphogen mapping. |
| Beyond fluorescence: watching genetically-encoded bioharmonophores assemble in live mammalian cells | Fluorescent proteins have shaped modern cell biology, but they come with trade-offs: they bleach, saturate, and blur under high-intensity light. These limitations cap what we can see—especially when tracking fast or subtle dynamics over long periods. What if we could express a label that never fades, never saturates, and produces a clean, quantifiable signal—one that’s visible through even the densest cellular environments? This project gives you the opportunity to build and validate a new class of genetically encoded imaging probes—bioharmonophores—in live mammalian cells. These are not fluorescent proteins. Instead, they are small peptides that self-assemble into non-centrosymmetric nanocrystals inside protein shells, generating second-harmonic generation (SHG) signal: a narrow, photostable, and unbleachable optical readout. At the heart of the system is a synthetic expression circuit: peptides with strong SHG potential are targeted to encapsulin nanocompartments, where they remain inert until released by a genetically encoded TEV protease. This two-part design allows for controlled liberation and local concentration of SHG-active sequences, triggering their self-assembly into crystalline structures that produce strong SHG signal when imaged. This project focuses on the critical proof-of-principle: can we trigger SHG-active peptide self-assembly inside living human or mouse cells? This has never been shown. Success would confirm that bioharmonophores can be expressed, activated, and imaged in standard cell culture, opening the door to genetically programmable, multiplexed SHG imaging in deep tissues. Practical student experience Over 9 months you will: • design and model SHG-active peptide candidates for expression in mammalian cells, • co-express peptides with encapsulin shells and a TEV-protease under inducible control, • validate encapsulation and proteolytic release by western blot, microscopy, and SHG polarimetry, • culture human and/or mouse cells to assess intracellular assembly efficiency and compatibility, • quantify SHG signal strength across time and conditions using an existing Zeiss two-photon microscope platform • analyse optical signatures to correlate peptide identity, structure, and signal intensity. This project gives you hands-on experience with synthetic gene circuits, nonlinear optics, mammalian cell engineering, signal quantification, and peptide design—a complete pipeline from molecule to imaging outcome. |
| Advancing Mechanobiology with ChemiGenEPi Biosensors | Cells are constantly pushed, stretched, and squeezed—and they feel it. Mechanobiology is the science of how cells sense and respond to forces, a process that drives development, organ function, and disease. At the centre of this force-sensing machinery is Piezo1, a pressure-sensitive ion channel. Our lab has already introduced GenEPi, the first genetically encoded Piezo1 activity reporter, but we want to go further. What if you could watch Piezo1 at work in real time, in living tissue, with sensors that are brighter, more stable, and tunable across colours? This project gives you the chance to build exactly that: ChemiGenEPi1.0, a brand-new chemigenetic biosensor. By combining the genetic precision of GenEPi with the dye-based flexibility of WHaloCaMP, ChemiGenEPi will let us see Piezo1 activity with unprecedented clarity—from single cells in culture to developing zebrafish embryos. Practical student experience Over 9 months, you’ll: • design and engineer the ChemiGenEPi sensor using HaloTag chemistry and advanced dye-ligands, • test its performance in live-cell imaging assays (brightness, photostability, dynamic range), • push it further into in vivo models to see mechanosensation unfold in real biological contexts. This project is hands-on at the interface of molecular engineering, synthetic biology, and cutting-edge imaging. You’ll gain experience in biosensor design, live-cell fluorescence and FLIM microscopy, and in vivo validation. |
| PhOTO-Bow: Painting Cell Lineages with Light | Every organism begins as a single cell, yet by adulthood becomes an intricate mosaic of billions. How do individual cells decide who they will become, where they will go, and how their descendants contribute to health or disease? To answer that, we need to see the entire history of each cell—who it divides into, how it moves, and when it changes fate. Traditional lineage-tracing tools are powerful but blunt: they label too many cells at once or rely on stochastic recombination without spatial or temporal control. PhOTO-Bow changes the game. It merges two powerful technologies - primed conversion single cell labelling and Cre/lox rainbow recombination - to create a system where you decide when and where every colour appears. By combining the pinpoint accuracy of light-activated primed conversion with the stochastic diversity of the Brainbow system, PhOTO-Bow allows you to illuminate the story of tissue development and disease in living organisms. Each cell’s colour becomes a barcode of identity, history, and fate—recorded directly in its fluorescence. Practical student experience You will: • test PhOTO-Bow constructs to integrate precise light control and randomised recombination, • perform primed conversion using dual-beam illumination to trigger spatially confined activation at the single-cell level, • induce rainbow recombination through light-controlled Cre activity, permanently marking clones with unique spectral fingerprints, • track clonal expansion and migration across development using Leica light-sheet microscopy, • quantify lineage dynamics with advanced 3D image segmentation and Python-based tree reconstruction tools. This project gives you hands-on exposure to synthetic gene circuit design, live-cell imaging and computational lineage analysis - a complete pipeline from molecular construction to visualising multicellular history in living tissue. Why It Matters PhOTO-Bow is not just another lineage tool - it is a cinematic recorder of biology in motion. It lets you watch, in real time, how a single cell’s decision ripples through development or disease. By integrating primed conversion precision with rainbow recombination diversity, this system can resolve cellular ancestry with unprecedented fidelity—mapping how clonal mosaics emerge during organogenesis, tissue repair, or tumour evolution. The resulting colour-coded trees will provide insight into how fate decisions propagate through space and time. |
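Several of the Pantazis projects above culminate in Python-based extraction of clonal dynamics from 3D image stacks. As a flavour of that step, here is a minimal sketch linking segmented cells between two timepoints by nearest centroid, using scikit-image and SciPy; the input arrays and distance cutoff are illustrative assumptions, not the lab's pipeline.

```python
import numpy as np
from skimage import measure
from scipy.spatial import cKDTree

def link_frames(labels_t0, labels_t1, max_dist=10.0):
    """Match segmented cells between consecutive 3D frames by nearest centroid.

    labels_t0, labels_t1: integer label volumes from a segmentation step.
    max_dist is an assumed cutoff in voxels; returns (label_t0, label_t1) pairs.
    """
    p0 = measure.regionprops(labels_t0)
    p1 = measure.regionprops(labels_t1)
    tree = cKDTree([p.centroid for p in p1])
    links = []
    for cell in p0:
        dist, idx = tree.query(cell.centroid)
        if dist <= max_dist:
            links.append((cell.label, p1[idx].label))
    return links
```

Chaining such frame-to-frame links over a full time series (and handling divisions, where one centroid maps to two) yields the lineage trees from which competition and fate statistics are computed.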
Profile: https://profiles.imperial.ac.uk/d.farina
Contact details: d.farina@imperial.ac.uk
| Project title | Description |
| Wearable neural interface for neurorehabilitation with physiological characterisation | This project develops a wearable neural interface to support neurorehabilitation by capturing and decoding neuromuscular activity in real time. The system integrates physiological characterisation of motor function with advanced sensing technologies to enable personalised feedback and training. It serves as a technology platform that can be applied to various assistive and therapeutic systems, such as prostheses, orthoses, or rehabilitation robots. |
| Developing an ultrasound-based neuromechanical simulation model | Recent advances in ultrasound-based signal processing methods have enabled the detection of motor neuron activity in humans through recordings from muscles that are activated. These advances have great potential in neural interfacing applications such as prosthetic control and neurorehabilitation. To guide hardware development for wearable systems, improve decoding algorithms, and refine existing methodological pipelines, there is a need to generate synthetic ultrasound data that mimics authentic data while providing full control over the model inputs. Therefore, the model is expected to play an important role in the research group’s ultrasound-based activities. |
| emg2qwerty - development of a transformer-based neural network for computer typing prediction from EMG signals | The first part of the project will involve developing a transformer architecture for EMG-based typing prediction using a publicly available dataset (https://github.com/facebookresearch/emg2qwerty). This model will be evaluated against baseline results to serve as a benchmark. During the second part of the project, the student will utilise the pre-trained model and develop methods for fine-tuning it on new subjects. These methods will then be tested in real time using a wearable EMG setup. The possibility of publishing the work will be discussed based on the project’s results. |
| Deep inverse modelling for neural circuits | Understanding how neural circuits communicate is essential to uncovering the mechanisms behind brain function and dysfunction. Neural circuit models describe how multiple interconnected brain areas communicate to produce complex spatiotemporal neural activity. These models are particularly relevant to study neurodegenerative diseases such as Parkinson’s disease, where disruptions in communication manifest as oscillatory or bursting patterns in electrophysiological data. By analysing these observed data with inverse modelling, we can infer what properties of the neural circuit models best explain the observed abnormalities and explore interventions to restore normal function. |
| Disentanglement of neurophysiological time series to improve human-machine interfacing | This project aims to tackle one of the most challenging problems in neuroscience and brain-machine interfaces: disentangling overlapping neural signals from multi-electrode recordings. By leveraging state-of-the-art deep learning techniques, particularly sparse autoencoders, we seek to develop a robust, unsupervised method for separating and identifying individual neural sources from complex, mixed recordings. |
| Ultrasound Imaging with Distributed Sensors for Human-Machine Interfacing | Ultrasound-based human-machine interfacing is an emerging technology, recently showing strong results for controlling prosthetic devices and virtual-reality digital twins. A promising research direction is to explore whether images can be generated from distributed sensors positioned on the arm in an “armband-like” fashion. Challenges in sensor localisation and beamforming remain open and need to be better explored. This project will start with simple water-tank experiments to develop and test these methods, and eventually test them in participants for control tasks. |
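For the signal-disentanglement project above, one concrete starting point is a sparse autoencoder. The sketch below shows one training step in PyTorch with an L1 penalty on the latent code; the layer sizes, sparsity weight and random stand-in data are illustrative assumptions, not the project's architecture.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder; an L1 penalty on the code z encourages
    only a few latent units to be active for any given input sample."""
    def __init__(self, n_channels=64, n_latent=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_channels, n_latent), nn.ReLU())
        self.decoder = nn.Linear(n_latent, n_channels)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
l1_weight = 1e-4  # assumed sparsity weight; tuned in practice

x = torch.randn(32, 64)  # stand-in for a batch of multi-electrode samples
opt.zero_grad()
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x) + l1_weight * z.abs().mean()
loss.backward()
opt.step()
```

The L1 term drives most latent units towards zero for each sample, so individual units tend to specialise; this specialisation is what makes the learned code a candidate for separating overlapping neural sources.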
Profile: https://profiles.imperial.ac.uk/d.labonte
Contact details: d.labonte@imperial.ac.uk
| Project title | Description |
| Body size and the cost of the neural control of movement | Movement is integral to all animals, and emerges from a coordinated interaction between a nervous control architecture and the musculoskeletal system. Traditionally, and still today, these two elements are often considered in isolation. But no muscle will generate a force without being instructed to do so, and brains are embodied and thus useless without muscles to actuate – the two systems must be in tune. The ideal tuning may reasonably be expected to change with animal size: an elephant must worry about gravity much more than an ant, and it is widely accepted that the musculoskeletal system adapted to meet this and other mechanical constraints. What is much less clear is how size impacts the costs and optimal strategy of neural control, and how both interact with size-specific changes to the physical environment. In this project, you will begin building a bridge between the two fields involved – neuroscience and biomechanics. You will tackle two related problems. First, you will estimate the costs of neuronal computation across animal body sizes, and compare them to the costs involved in muscle-driven locomotion (an illustrative scaling sketch follows this table). Are neural control costs significant, and how does their relative importance vary with animal size? Next, you will link established mechanical consequences of changes in body size, such as increased limb cycling periods, with equivalent but understudied changes in the optimal neural control strategy, including demands on nerve action potential velocity, muscle activation times and more. Do mechanical and neuronal constraints vary in synchrony, or are demands on one steeper, so that it takes a dominant role in shaping animal movement across body sizes? Suggested reading: Hooper, S. L. (2012). Body size and the neural control of movement. Current Biology, 22(9), R318-R322. More, H. L., & Donelan, J. M. (2018). Scaling of sensorimotor delays in terrestrial mammals. Proceedings of the Royal Society B, 285(1885), 20180613. Attwell, D., & Laughlin, S. B. (2001). An energy budget for signaling in the grey matter of the brain. Journal of Cerebral Blood Flow & Metabolism, 21(10), 1133-1145. Laughlin, S. B. (2001). Energy as a constraint on the coding and processing of sensory information. Current Opinion in Neurobiology, 11(4), 475-480. |
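As a flavour of the first problem above, the comparison boils down to evaluating two power laws in body mass and asking how their ratio behaves. The toy sketch below shows the structure of that calculation only; the exponents and prefactors are placeholders that the student would replace with literature-derived estimates (e.g. building on Attwell & Laughlin, 2001), not results.

```python
import numpy as np

# Toy scaling comparison: how might neural-control cost and locomotor cost
# scale with body mass M? All exponents and prefactors below are assumed
# placeholders, to be replaced with values fitted from the literature.
M = np.logspace(-3, 3, 7)        # body mass in kg, ant-ish to elephant-ish

a_neuro, b_neuro = 0.1, 0.7      # assumed: neural cost  ~ a * M**b  (W)
a_loco, b_loco = 10.0, 0.68      # assumed: locomotor cost ~ a * M**b (W)

P_neuro = a_neuro * M ** b_neuro
P_loco = a_loco * M ** b_loco

for m, pn, pl in zip(M, P_neuro, P_loco):
    print(f"M = {m:8.3f} kg   neural/locomotor cost ratio = {pn / pl:.3f}")
```

If the two exponents differ, the ratio drifts systematically with size, which is precisely the kind of signature the project would look for in real data.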
Profile: https://profiles.imperial.ac.uk/h.cagnan
Contact details: h.cagnan@imperial.ac.uk
| Project title | Description |
| Dual-site transcranial alternating current stimulation for tremor control | Involuntary shaking is a common symptom of Parkinson’s Disease and Essential Tremor, affecting around one million people in the UK. This project aims to leverage plasticity (the brain’s ability to adapt and change) for therapeutic purposes by delivering well-timed electrical inputs to key regions across the tremor network. Based in the Cagnan lab, the focus will be on piloting dual-site stimulation of the motor cortex and cerebellum to achieve longer-lasting therapeutic benefits for tremor patients. Your role will include (1) modelling the volume of tissue activated during dual-site stimulation, (2) developing and testing closed-loop control algorithms and (3) developing approaches for efficient optimisation of stimulation parameters. We are looking for a student with strong skills in engineering, instrumentation, and programming, along with a background in neuroscience. References: 1. Schwab BC, König P, Engel AK. Spike-timing-dependent plasticity can account for connectivity aftereffects of dual-site transcranial alternating current stimulation. NeuroImage. 2021;237:118179. doi:10.1016/j.neuroimage.2021.118179 2. Schwab BC, Misselhorn J, Engel AK. Modulation of large-scale cortical coupling by transcranial alternating current stimulation. Brain Stimulation. 2019;12(5):1187-1196. doi:10.1016/j.brs.2019.04.013 3. Saturnino GB, Madsen KH, Siebner HR, Thielscher A. How to target inter-regional phase synchronization with dual-site Transcranial Alternating Current Stimulation. NeuroImage. 2017;163:68-80. doi:10.1016/j.neuroimage.2017.09.024 4. Fleming JE, Sanchis IP, Lemmens O, et al. From dawn till dusk: Time-adaptive bayesian optimization for neurostimulation. PLOS Computational Biology. 2023;19(12):e1011674. doi:10.1371/journal.pcbi.1011674 5. Cagnan H, Pedrosa D, Little S, et al. Stimulating at the right time: phase-specific deep brain stimulation. Brain. 2017;140(1):132-145. doi:10.1093/brain/aww286 |
| Modulatory role of transcranial stimulation on cognitive control | Everyday decision-making depends on our ability to adapt and sometimes stop actions unexpectedly. This skill can range from something as simple as resisting a tempting slice of cake to something as critical as hitting the brakes in an emergency. Cognitive control can be compromised in a range of neuropsychiatric disorders and remains difficult to restore using invasive and non-invasive brain stimulation techniques. We previously targeted the medial prefrontal cortex, a key brain region involved in response inhibition, using transcranial electrical stimulation to modulate neural rhythms and associated behaviours. This project, based in the Cagnan lab, will focus on (1) data analysis of electrophysiological and behavioural responses, (2) stimulation artifact removal, and (3) modelling behavioural and electrophysiological data. We are looking for a student with strong signal processing skills and a background in neuroscience. Reference: Mandali A, Torrecillos F, ... Cagnan H, et al. Tuning the brakes – modulatory role of transcranial random noise stimulation on inhibition. Brain Stimulation: Basic, Translational, and Clinical Research in Neuromodulation, Volume 17, Issue 2, 392-394. |
| Phase Transitions in Circadian Tremor Patterns | Involuntary shaking is a common symptom of Parkinson’s Disease (PD), affecting approximately 150,000 people in the UK. Tremor in PD can be triggered or influenced by various factors throughout the day (e.g., stress or medication intake), making it crucial to identify trends that are key to effective clinical management. The project will take place in the Cagnan Lab and will involve analysing a unique dataset consisting of long-term (2 years) recordings of PD patients collected via wearable sensors in free-living conditions. Tremor events in this dataset have already been identified through machine learning algorithms. With this project, we will explore the circadian dynamics of tremor in PD over several days using recurrence quantification analysis (RQA), a nonlinear data-driven technique that provides objective markers for regularity, trends, and phase transitions in time series data. A particular focus will be on the impact of patients’ medication schedule changes on these dynamics. We are seeking a motivated student with programming and signal processing skills who is eager to deepen their understanding of the circadian progression in neurodegenerative disorders. |
| Sleep Fragmentation in Parkinson’s Disease and its impact on tremor | Sleep fragmentation, characterised by frequent awakenings or disruptions, has a significant impact on daytime functioning, leading to increased fatigue, reduced motor control, cognitive decline, and heightened stress and anxiety. In Parkinson’s disease (PD), sleep fragmentation can diminish a person’s ability to manage and compensate for daily tremors, worsening their symptoms' severity and duration. While recent evidence supports this connection, a systematic and comprehensive study with a representative PD cohort and long-term follow-up is still lacking. This project aims to investigate sleep fragmentation from multiple angles and assess how it affects the severity and duration of daily tremors in PD patients. The research will be conducted in the Cagnan Lab, utilising a unique dataset containing two years of long-term recordings from PD patients in free-living conditions, with tremor events already identified by machine learning algorithms. We are looking for a motivated student with strong programming and signal processing skills who is eager to better understand the relationship between sleep fragmentation and Parkinson’s disease symptoms. |
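For the 'Phase Transitions in Circadian Tremor Patterns' project above, the core object in recurrence quantification analysis is the recurrence matrix. Below is a minimal NumPy sketch; the embedding dimension, delay and threshold are illustrative assumptions that would in practice be chosen by standard RQA heuristics (e.g. false nearest neighbours).

```python
import numpy as np

def recurrence_matrix(x, dim=3, delay=1, eps=0.5):
    """Time-delay embed a 1D series and threshold pairwise distances.

    dim, delay and eps are assumed parameters, not values from the study.
    """
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return dists < eps

x = np.sin(np.linspace(0, 20 * np.pi, 500))  # stand-in for a tremor envelope
R = recurrence_matrix(x)
recurrence_rate = R.mean()  # fraction of recurrent point pairs
print(f"recurrence rate = {recurrence_rate:.3f}")
```

Markers such as determinism and laminarity are then computed from the diagonal and vertical line structures of `R`, and tracked over days to flag phase transitions, for example around changes in medication schedule.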
Profile: https://profiles.imperial.ac.uk/j.choi
Contact details: j.choi@imperial.ac.uk
| Project title | Description |
| Visualising Sound using Machine Learning and/or Signal Processing Algorithms | Purpose. The purpose of this project is to develop deep neural network or beamforming algorithms that can reconstruct the location of acoustic sources using multiple microphones (a minimal delay-and-sum sketch follows this table). Motivation. In therapeutic ultrasound, a focused ultrasound transducer is used to concentrate energy at a point in the body, allowing us to noninvasively and locally manipulate tissue (tumour ablation, drug release from acoustically active particles, etc.). Our laboratory has developed therapeutic ultrasound devices for delivering drugs to the brain (across the blood-brain barrier) for the treatment of brain cancers, neurodegenerative diseases, and other neurological conditions. However, the success or failure of the technique has been difficult to track, as clinicians are unable to directly observe what is happening within the body. An emerging way of monitoring this procedure is with the use of microphones located around the focused ultrasound transducer. Sounds generated during the procedure are captured by the microphones. We then reconstruct an image of the treated area using passive beamforming algorithms. The reconstruction of a signal source based on multiple sensor signals is broadly known as beamforming. In addition to medical imaging, it is used in underwater acoustics, astronomy, and other disciplines. The problem with many existing passive beamforming algorithms is the poor spatial resolution in the reconstruction of the sound sources. This means we can't precisely locate where the source is. The purpose of this project is to develop a deep neural network and/or signal processing methods that can reconstruct an image of the treated region with better accuracy and spatial resolution. Work description. This work will involve generating training data using computer simulations in a Matlab toolbox known as k-Wave. We will then train the deep neural network in PyTorch or develop fundamental signal processing algorithms. We will explore conventional neural networks, such as convolutional neural networks and recurrent neural networks, and potentially more advanced techniques, such as transformers and physics-inspired neural networks. |
| Optical hand tracking using machine learning | Purpose. Implement optical hand tracking using machine learning, analyse speed and bottlenecks, and explore ways of improving existing methods. Motivation. Optical hand tracking is a method used in virtual reality, augmented reality, and human-machine interfacing, as it allows the user to interact with virtual environments and communicate with robots and machines in a natural way. However, optical hand tracking has not achieved widespread adoption due to limitations in speed and precision, and a constant breaking of the immersive experience. The purpose of this project is to analyse existing optical hand tracking methods and quantify their speed, precision, and failure rates, and to explore ways of improving optical hand tracking performance. Work. The student will set up their own optical hand tracking rig using a camera (e.g., a webcam) and write their own optical hand tracking method from scratch using Python and PyTorch. The student will then improve the algorithm using state-of-the-art published algorithms and quantify the speed, precision, and failure rates of all of these methods. The student will evaluate the bottlenecks in each of these categories: for example, what are the physical, hardware, or computational reasons for these limitations? The speed of light is certainly not the constraint. Perhaps it's the two-step process of first identifying where the hand is in the image and then identifying where the hand joints are located? Is the constraint due to the hardware's calculation speed? We will explore these questions and many others. The student will learn how to approach a common machine learning problem with the deep analytical abilities of an academic researcher. |
| Fast Tracking of Physical Objects in Virtual Reality (VR) and Augmented Reality (AR) | Purpose. Create a physical object in the real world that can be tracked in the virtual world with speed and accuracy. Motivation. Virtual reality (VR) and augmented reality (AR) provide an infinite world that users can interact with. In many applications, we would like to project our real-world body (hands, legs, etc.) and objects (a ball, stick, cup, etc.) into the virtual space. However, current methods remain slow, clunky, and inaccurate, breaking the immersive experience that we hope for and creating a disconcerting feeling when using VR and AR. What are the current methods for tracking objects, and what are the physical, hardware, and computational limitations? How can we improve upon these limitations? Work. You are tasked with building a physical object that includes trackable sensors: infrared light-emitting diodes (LEDs) and inertial measurement units (IMUs). The IMU provides very fast positional tracking (thousands of Hz) but suffers from drift over time. The infrared LEDs provide highly accurate tracking but are slow (tens of Hz). We will therefore provide fast tracking with the IMU that is corrected by the infrared LEDs over time. This combined IMU + infrared LED tracking provides a rapid and accurate method of tracking objects. The object's position will be recreated in a virtual world using Unity. The speed and accuracy will be quantified. You will then explore ways of improving upon key limitations of this method, such as tracking out of the field of view, or adding capabilities, such as user feedback with haptic vibration motors. |
| Ultra-High-Speed Video Camera at 50 Million Frames per Second | Purpose: Develop an ultra-high-speed video camera running at up to 50 million frames per second. Motivation: Certain phenomena, such as ultrasound imaging and therapy with microbubbles (contrast agents), operate at MHz rates. Such fast dynamics cannot be captured using traditional cameras, which operate at around 60 Hz. Commercially available cameras can reach 1 million frames per second, which is still not enough. And while a 10 million frames per second camera is available on the market for £200k, that camera cannot capture more than 256 frames (25.6 µs of data). We propose a new video camera concept that could reach up to 50 million frames per second, capable of capturing a nearly unlimited number of frames. By creating this device, we would be able to observe phenomena in biological tissue that no one has been able to observe. Outside the domain of biomedical engineering, this camera could be used to image plasma in fusion reactors, high-speed objects in space, and other high-speed phenomena that require incredibly high frame rates. Work: This project requires electrical engineering skills. The student would be asked to build circuits that are connected to a unique sensor array. If the analog circuit is successful, we would then require some digital electronics skills and optics (physics). |
| Microfluidic Devices for Engineering Advanced Microparticles for Noninvasive Surgery | Purpose: To develop microfluidic devices to engineer advanced microparticles that can be controlled noninvasively with focused ultrasound devices. Motivation. The vision for noninvasive surgery is to manipulate and probe tissue deep in the body without having to cut the body open. Dr. Choi's laboratory develops noninvasive ultrasound devices that emit and receive sound from the patient's surface. We are working with Dr. Au's laboratory to create particles that our devices can manipulate. Here, we ask the student to develop a microfluidic platform to create advanced microparticles that our noninvasive devices could manipulate. In particular, we would like to design microbubbles to address one of the greatest medical challenges of our time - treating brain disorders. Brain disorders such as Alzheimer's disease remain hard to treat, not because promising drugs aren't available, but because those drugs cannot cross the brain's blood vessels, which are lined by the blood-brain barrier. Using engineered microbubbles remotely controlled by ultrasound, we can open the blood-brain barrier, finally allowing drugs to enter the brain. The work. Build microfluidic devices. This includes working in a cleanroom. You may also be exposed to working with ultrasound devices, so strength in engineering and physics would be helpful. |
| Investigating the bioeffects of single domain antibodies in a 5xFAD mouse model | |
| Liquid biopsies following ultrasound-mediated blood-brain barrier opening | We are developing a noninvasive and localised method of altering the blood-brain barrier permeability. We would like to evaluate whether this technique can be used to assess the contents of the locally probed regions — whether the blood-brain barrier was indeed opened, or what diseases may be present in those regions. We are looking for someone with biological skillsets. This may involve analysing blood samples or staining animal brain slices. |
| Visualising Sound - A Neural Network for Passive Acoustic Beamforming | |
| An Ultra-High-Speed Depth Camera | We propose an ultra-high-speed depth camera. This camera would be able not only to capture optical images, but also to see how far away those objects are. What makes this camera unique is that it would be ultra-fast. It would be able to track the depth of very fast-moving objects, or track depth when the camera itself is on a very fast-moving object (e.g., a drone). The camera would be very versatile, being able to track people, survey land, and more. There is also the potential for this camera to be adapted for medical imaging. We are still in the very early stages of sensor development. The first task would be to work with a single optical sensor and then progressively work up to a 2 x 2 array, a 4 x 4 array and so forth. Required skillset: electronics engineering. You'll need a basic understanding of how to build analogue circuits. |
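For the passive beamforming projects in this table ('Visualising Sound'), the baseline against which any neural network would be compared is delay-and-sum: back-project each microphone signal by its propagation delay so that energy radiated from a candidate point adds coherently. Below is a minimal NumPy sketch; the sensor layout, sampling rate and sound speed are illustrative assumptions, and a real implementation would interpolate delays rather than wrap samples with `np.roll`.

```python
import numpy as np

def delay_and_sum(signals, fs, sensor_pos, grid_points, c=1500.0):
    """Passive delay-and-sum map over a grid of candidate source points.

    signals:     (n_sensors, n_samples) array of received waveforms.
    fs:          sampling rate in Hz.
    sensor_pos:  (n_sensors, 3) sensor positions in metres.
    grid_points: (n_points, 3) candidate source positions in metres.
    c:           assumed speed of sound in water/tissue (m/s).
    """
    n_sensors, n_samples = signals.shape
    image = np.zeros(len(grid_points))
    for i, r in enumerate(grid_points):
        delays = np.linalg.norm(sensor_pos - r, axis=1) / c
        shifts = np.round(delays * fs).astype(int)
        # Advance each channel by its propagation delay so that energy
        # radiated from point r adds coherently across sensors.
        aligned = np.array([np.roll(signals[k], -shifts[k])
                            for k in range(n_sensors)])
        image[i] = np.sum(aligned.sum(axis=0) ** 2)
    return image
```

The poor spatial resolution of this baseline, noted in the project description, is exactly what a learned reconstruction would aim to improve on, with k-Wave simulations supplying paired training data.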
Profile: https://profiles.imperial.ac.uk/m.boutelle
Contact details: m.boutelle@imperial.ac.uk
| Project title | Description |
| Real-time monitoring of traumatic brain injury patients | Background: We are funded by the Wellcome Trust / Department of Health to build a clinical instrument to monitor traumatic brain injury patients during their 5-day stay in the intensive care unit. We monitor brain pressure, brain electrical activity and levels of metabolic markers such as potassium, glucose and lactate to understand the state of the injured brain tissue. We have now built a prototype instrument and will be using it in the intensive care unit of King's College Hospital. Aim: To analyse the data for new patterns of change indicative of 'secondary insults' to the brain. There is the possibility of working alongside our monitoring team in the intensive care unit to collect this vital data. Methods: You will analyse this clinical data to find new patterns of change across the measured variables that indicate transient worsening of the brain tissue state. If hospital access is possible, you will have the opportunity to learn how to operate the new clinical instrument in a clinical environment. This will include characterising microfluidic biosensors. You will then work with our programmers to embed this as an adverse 'event' in our data analysis software. |
| Wearable sensors for detection of ALS | This project is an extension of an EPSRC-funded project to detect the progression of ALS (motor neurone disease) in patients. It is a collaboration between the Boutelle group, the Drakakis group and Prof Chris Shaw (Maurice Wohl Clinical Research Centre, King's College Hospital). ALS is a devastating disease characterised by rapid deterioration in motor function, typically leading to death within a few years of diagnosis. Development of therapies is hampered by the lack of a reliable method of determining the progression of the disease. Typically, clinical function assessments are used, but the scale is too coarse and too subjective to allow evaluation of drug therapies that might, for example, slow the rate of disease progression. We have been following a different approach, in which we use multiple skin contacts to record EMGs from pairs of major muscle groups in the arms or legs of a patient. We are looking for the complex fasciculations that seem to be characteristic of ALS. There are two strands to the project: the first is to develop wearable clothing that can reliably hold the contacts onto the skin to allow longer recordings to be made outside of the clinical consultation. The second is to develop pattern recognition algorithms to group ALS fasciculation potentials to allow processing of large volumes of data. This is particularly important as the project now aims for patient self-monitoring at home. The project will be supported by the MGB group and MRC Fellow James Bashford at KCH. |
| Tracking neuronal injury in the human brain - using direct cortical responses (DCR) | This project is a variant of MGB 1911. Both come from a long-term collaboration with Prof Anthony Strong and his team at King's College Hospital NHS Trust. In the past we have demonstrated the importance of spreading depolarisations (SDs) in the development of secondary brain injury in patients who have had a severe traumatic brain injury. This project involves working with the field potential data streams we obtain from our patients. The distinction between this project and MGB 1911 is that work in this project focusses on what are called direct cortical responses (DCR). If we send out a single stimulus pulse of sufficient power, it will cause neighbouring neurones to respond by firing back once. The power required reflects the resting state of those neurons. In the brain of a TBI patient this is perhaps the only way we can get an unbiased measurement of how destabilised the brain tissue is by the injury. Preliminary data indicate that we can track the level of tissue injury in real time whilst patients are in a drug-induced coma. The project has the possibility of visits to the hospital (if allowed) to see the clinical team, together with working with project members in the MGB group, in particular Sharon Jewel, an MRC training fellow and expert in neurophysiology. The aim will be to work with Sharon and a state-of-the-art human neurophysiological instrument (Neurolinx), to monitor DCR events and see how they are changed by interaction with the pathophysiology in the injured human brain. |
| Real-time detection of exposure to acetylcholine esterase inhibitors such as pesticides | This project comes from a Government agency-sponsored PhD project within the MGB group. Many pesticides are toxic via their effects on acetylcholine esterase. They prevent the enzyme from breaking down acetylcholine, the main neurotransmitter at the neuromuscular junction. At high doses this leaves all muscles (e.g. heart, diaphragm) in permanent contraction, resulting in death. At low doses the effect is more subtle, often giving very non-specific symptoms of feeling unwell. This project is to develop a blood test that will enable low-level exposure to such agents to be detected. Our approach is to use microdialysis sampling of the blood coupled to microfluidic biosensors to detect the free levels of acetylcholine esterase associated with the haemoglobin of the blood. The project will work on optimising this microfluidic biosensor system in vitro (i.e. not requiring blood). Students will learn how to make microfluidic devices and use computer-controlled microfluidic systems to build biosensors. The project will be supported by PhD student Georgia Smith. |
| Medical biosensors for human tissue with integrated calibration microfluidics | We have been designing a new class of biosensor for use in human tissue in a project in collaboration with the Stevens group and Polina Anikeeva (MIT). In addition to microelectrode sensing elements these devices also incorporate microfluidic channels to allow (a) in-tissue calibration and (b) local administration of stimulating or blocking chemicals to the tissue (see the reference at the end for the idea, from a proof-of-concept device). This project takes these ideas further, developing the device concept, establishing new methods of operation in tissue, and creating new device forms for different types of tissue or organ monitoring. There is also the potential within this project for another student who is interested in modelling to use FEA of the concentration profiles of these devices to optimise designs. The project will be supported by the MGB group including PDRA Dr Sally Gowers. Paper: Booth MA, Gowers SAN, Hersey M, Samper IC, Park S, Anikeeva P, Hashemi P, Stevens MM, Boutelle MG. 2021. Fiber-Based Electrochemical Biosensors for Monitoring pH and Transient Neurometabolic Lactate. Analytical Chemistry 93: 6646-55. Selected as journal cover. |
| Imaging ions in solution using an ISFET array ion concentration detector | We have developed, in collaboration with Prof Pantelis Georgiou (EEE), a platform that can 'image' ion concentrations at the surface of a CMOS-fabricated detector. Each device has more than 2500 sensing pixels in an array (2 x 2 mm total area). The surface of each pixel is effectively the gate of an FET transistor, so it responds to the surface charge. As manufactured, such devices are inherently selective to pH. However, if we coat them with an ion-selective polymer membrane we can make the pixels sensitive to the concentrations of different ions in solution - making an Ion Selective Electrode (ISE), whose output varies with the logarithm of ion concentration (a minimal calibration sketch follows this table). Hence across an array we can detect the main ions present in physiological fluids or tissue extracellular fluids - this is particularly useful when studying injury processes in the human brain or human kidney. In this lab-based project you will learn how to use the ISFET array and then characterise the spatial selectivity of the device. This will involve using a new bioplotter facility (BioDOT Omnia - https://www.biodot.com/technology) just delivered to the department. You will use the bioplotter to precisely control where polymer is dispensed, then see how the device responds to small additions of test sample. The project will be supported by MGB group PhD student Chiara Cicatiello. Reference about prototype device: 1. Moser N, Leong CL, Hu Y, Cicatiello C, Gowers S, Boutelle M, Georgiou P. 2020. Complementary Metal–Oxide–Semiconductor Potentiometric Field-Effect Transistor Array Platform Using Sensor Learning for Multi-ion Imaging. Analytical Chemistry 92: 5276-85 |
| Engineering a new class of chemical sensor for human tissue monitoring using FET arrays | We have developed, in collaboration with Prof Pantelis Georgiou (EEE), a platform that can 'image' concentrations in the solution at the surface of a CMOS-fabricated detector. Each device has more than 2500 sensing pixels in an array (2 x 2 mm total area). The surface of each pixel is effectively the gate of an FET transistor, so it responds to the surface charge. As manufactured, such devices are inherently selective to pH. However, we are investigating how we can change this chemical selectivity by immobilising specific recognition sites, such as antibodies or aptamers, on the sensing surface. Specific targets include inflammatory cytokines such as IL-6, which are raised in injured tissue, including the brain. In this lab-based project you will learn how to use the ISFET array and how to immobilise antibodies onto a surface. The surface can then be characterised using XPS, fluorescence microscopy and FTIR spectroscopy. Once the device is optimised you will characterise its performance in biologically relevant fluids. The project will be supported by MGB group PhD student Chiara Cicatiello. Reference about prototype device: 1. Moser N, Leong CL, Hu Y, Cicatiello C, Gowers S, Boutelle M, Georgiou P. 2020. Complementary Metal–Oxide–Semiconductor Potentiometric Field-Effect Transistor Array Platform Using Sensor Learning for Multi-ion Imaging. Analytical Chemistry 92: 5276-85 |
| Real-time neurochemical monitoring of traumatic brain injury patients | This project is based on a long collaboration with Professor Anthony Strong from King's College Hospital. We have been using multimodal monitoring of patients in the intensive care unit who have severe traumatic brain injury. We have used monitoring of the brain electrophysiology (see other projects) and real-time neurochemical monitoring using the sampling technique microdialysis. My group has developed many microfluidic-based systems to enable this monitoring, and we have shown that monitoring of brain energy metabolism via glucose and lactate measurement gives vital information as to how the injured brain copes with dynamic events such as spreading depolarisations, which are responsible for much of the secondary brain injury that occurs during the 5-10 days patients are in the intensive care unit. This project is to evaluate a new clinical instrument, the Loke system by M Dialysis (https://www.mdialysis.com/product/md-system-1-0-loke/). As world leaders in this field we are being provided with 2 Loke systems. The student will initially characterise the performance of the Loke system using the equipment based in the Boutelle group lab. They will then support the use of these systems to monitor patients in the intensive care unit at KCH. This project will be supported by the Boutelle group and MRC Clinical Fellow Sharon Jewell. Paper: Rogers ML, Leong C, Gowers S, Samper I, Jewell SL, Khan A, McCarthy L, Pahl C, Tolias CM, Walsh DC, Strong AJ, Boutelle MG. 2017. Simultaneous monitoring of potassium, glucose and lactate during spreading depolarisation in the injured human brain – proof of principle of a novel real-time neurochemical analysis system, continuous online microdialysis (coMD). J Cerebral Blood Flow and Metab 37: 1883-95 |
| Development of software to determine human neural excitability stimulus-response curves | In our collaboration with King's College Hospital we have specialised in developing methods to monitor the injured human brain. We monitor patients in the intensive care unit who have severe traumatic brain injury (TBI). We have previously shown, using electrodes placed on the brain surface during surgery, that intense waves of brain activity called spreading depolarisations (SDs) are an important cause of secondary brain injury. Recently we have developed a direct cortical response methodology, in which a controlled stimulus pulse is used to find information about the resting potential of the surrounding neurons. This has great potential for widespread use to help patients. This project is to develop software (a program or an app) to take the raw stimulus-response data from the Neurolinx high-performance recording system we use and convert it into an excitability curve, which tells the user what stimulus strength to select for recordings in that patient (a minimal curve-fitting sketch follows this table). This is a design, build and test project which can start with saved data and/or stimulus data recorded from peripheral nerves. There is the opportunity to test the final software on the Neurolinx instrument for patient monitoring. The project is supported by the MGB group including MRC fellow Dr Sharon Jewell. |
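For the wearable ALS project above, the pattern-recognition strand could plausibly start with something like the sketch below: grouping detected EMG events by simple waveform features using k-means. The features, window sizes and cluster count are illustrative placeholders, not the project's actual pipeline.

```python
# Hypothetical sketch: grouping candidate fasciculation potentials by waveform
# shape using simple features and k-means. Feature choices and the number of
# clusters are placeholders, not the project's actual method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def waveform_features(snippets):
    """snippets: (n_events, n_samples) array of aligned EMG event windows."""
    peak_to_peak = snippets.max(axis=1) - snippets.min(axis=1)
    energy = (snippets ** 2).sum(axis=1)
    # Duration proxy: samples above 20% of each event's peak amplitude
    duration = (np.abs(snippets) >
                0.2 * np.abs(snippets).max(axis=1, keepdims=True)).sum(axis=1)
    return np.column_stack([peak_to_peak, energy, duration])

def cluster_events(snippets, n_clusters=5):
    feats = StandardScaler().fit_transform(waveform_features(snippets))
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)

# Synthetic data standing in for detected event windows
rng = np.random.default_rng(0)
labels = cluster_events(rng.normal(size=(200, 128)))
print(np.bincount(labels))  # events per putative fasciculation group
```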
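For the ion-imaging project above: once a pixel is coated to act as an ISE, its output varies approximately linearly with the logarithm of ion concentration (a near-Nernstian response, about 59 mV/decade for a monovalent ion at room temperature). The sketch below shows that calibration step with made-up numbers standing in for measured pixel data.

```python
# Minimal sketch of an ideal-Nernstian calibration for an ion-selective pixel:
# fit E = E0 + S*log10(c) on calibration solutions, then invert for unknowns.
# The values below are illustrative, not measured ISFET data.
import numpy as np

cal_conc = np.array([1e-4, 1e-3, 1e-2, 1e-1])   # calibration solutions (mol/L)
cal_mV   = np.array([12.0, 70.5, 129.0, 188.0]) # pixel output (mV), synthetic

S, E0 = np.polyfit(np.log10(cal_conc), cal_mV, 1)  # slope ~59 mV/decade for z=1
print(f"slope {S:.1f} mV/decade, intercept {E0:.1f} mV")

def concentration(e_mV):
    """Invert the fitted calibration line for an unknown sample."""
    return 10 ** ((e_mV - E0) / S)

print(f"sample reading 100 mV -> {concentration(100.0):.2e} mol/L")
```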
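For the excitability-curve software project above, one plausible core step is fitting a sigmoid to the stimulus-response pairs and reading off the half-maximal stimulus as the suggested recording setting. This is only a sketch under that assumption, with synthetic data; the real Neurolinx data format and the clinically appropriate curve model would be defined with the supervising team.

```python
# Hedged sketch: convert raw stimulus-response pairs into an excitability
# curve by fitting a sigmoid. Data and parameter values are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(s, r_max, s50, k):
    """DCR amplitude as a function of stimulus strength s."""
    return r_max / (1.0 + np.exp(-(s - s50) / k))

stim = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])   # stimulus (mA)
resp = np.array([2.0, 3.0, 8.0, 25.0, 60.0, 85.0, 95.0, 98.0])  # response (uV)

(r_max, s50, k), _ = curve_fit(sigmoid, stim, resp, p0=[100.0, 2.0, 0.5])
print(f"half-maximal stimulus s50 = {s50:.2f} mA")  # candidate recording setting
```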
Profile: https://profiles.imperial.ac.uk/mengxing.tang
Contact details: mengxing.tang@imperial.ac.uk
| Project title | Description |
| Quantitative analysis of microvascular images | Background/Rationale: Microvascular flow is closely related to tissue function and pathology, including cancer and cardiovascular disease. For example, as tumours grow they eventually outgrow their blood supply, and angiogenesis, the formation of new blood vessels, is induced. The resulting vasculature can be different from that in healthy tissue. Therefore, the microvasculature can offer valuable information for detection and diagnosis of pathology. Imaging techniques to gather information on the microvasculature and to characterise a lesion as benign or malignant include dynamic contrast-enhanced MRI, contrast-enhanced CT, and ultrasound. However, they lack the sensitivity and resolution for imaging the microcirculation. Recent advances in super-resolution ultrasound offer very high resolution imaging (tens of microns) of the microvasculature in humans, and this is also a more affordable bedside technology. Aims/Objectives: To develop image analysis algorithms and analysis software to quantify a range of features in the acquired microvascular images. Example tasks include segmentation of vessels from the image and calculation of features of the vascular geometry, e.g. vessel size and tortuosity (a minimal tortuosity sketch follows this table). Experience: signal/image processing and programming skills |
| Mapping the Speed of Sound in Tissue: A New Approach Using Passive–Active Crossing Ultrasound Imaging | Ultrasound is a powerful and widely used imaging modality, offering real-time, non-invasive views inside the body. Conventional ultrasound focuses on differences in how tissues reflect sound, but this only tells part of the story. Another, often overlooked parameter is the speed of sound within tissue, which varies with structure, composition, and pathology. Measuring this directly could unlock an entirely new dimension of diagnostic imaging. This project explores a novel method to estimate local tissue sound speed using a combination of passive and active ultrasound imaging. These two modes interact with tissue differently: in passive beamforming, point sources (like microbubbles) appear farther away at higher sound speeds, while in active imaging, the same sources appear closer. By identifying the speed of sound at which a point aligns in both modes, we can estimate the true speed of sound at that location (a toy numerical illustration follows this table). We hypothesise that by introducing sparse microbubbles into the field of view and imaging them with both active and passive techniques, a full speed-of-sound map of the tissue can be reconstructed. This map could reveal subtle changes in tissue composition, with potential applications in cancer detection, fibrosis assessment, and beyond. The student will simulate this approach, develop algorithms to generate these speed-of-sound maps, and validate the method experimentally. This project spans signal processing, simulations, and hands-on lab work, opening the door to a fundamentally new imaging capability. |
| Learning-based blood flow estimation in the brain using ultrasound | Background: Accurate estimation of cerebral blood flow velocity is critical for diagnosing cerebrovascular conditions. Transcranial Doppler (TCD) uses ultrasound waves to measure cerebral blood flow velocity in the major intracranial arteries, which can be an indicator of increased stroke risk for children with sickle cell disease [1]. TCD provides blood flow spectra based on a single-element transducer, or colour-coded blood flow based on a phased or linear array probe. Both require three-dimensional knowledge of cerebrovascular anatomy and its variations to localise the targeted arteries, as well as precise probe positioning for tracking/monitoring. Moreover, the interpretation of the blood flow spectra for diagnosis and treatment demands professional training and practical experience, making the TCD study highly operator-dependent. Recently, volumetric ultrasound imaging has been developed and explored for transcranial blood flow imaging [2], enabling comprehensive visualisation of intracranial arteries with minimal manual adjustment. In this project, a deep learning framework is proposed to directly estimate blood flow velocity from volumetric Doppler ultrasound data (in humans). Aims and methods: A deep learning model will be trained to estimate blood flow velocity using Doppler signals. Different data preprocessing approaches, including IQ, phase, and radiofrequency data, will be explored. The proposed method will be compared against conventional Doppler processing (FFT-based and autocorrelation methods; a sketch of the autocorrelation baseline follows this table). The proposed framework will be validated using simulation, flow experiments, and in vivo human volunteer data. Skills: knowledge of ultrasound simulation software such as Field II and k-Wave and practical experience of deep learning (Keras, PyTorch) are preferable but not mandatory. References: [1] R. Adams et al., The Use of Transcranial Ultrasonography to Predict Stroke in Sickle Cell Disease, New England Journal of Medicine, vol. 326, no. 9, pp. 605–610, Feb. 1992, doi: 10.1056/NEJM199202273260905. [2] P. Xing et al., 3D ultrasound localization microscopy of the nonhuman primate brain, eBioMedicine, vol. 111, Jan. 2025, doi: 10.1016/j.ebiom.2024.105457. |
| Robotic-assisted multi-view ultrasound for volumetric blood flow imaging in the human brain | Background: Transcranial blood flow imaging using ultrasound, such as transcranial Doppler, is a non-invasive technique for cerebrovascular disease diagnosis and monitoring. Conventional TCD devices provide either 1D blood flow spectra or 2D slices, which contain limited spatial information and can be highly operator-dependent [3]. 3D ultrasonic imaging has been developed through, e.g., mechanical scanning of a 2D probe, matrix arrays, and multi-element arrays, enabling comprehensive visualisation of tissue structures [4]. In this work, we propose a robotic-assisted multi-view ultrasound imaging method aiming for improved imaging quality and a larger field of view by combining data from multiple views using advanced computational methods. Aims and methods: A robotic-assisted multi-view ultrasound imaging system will be developed, incorporating a robotic arm with an integrated control system to enable transcranial ultrasound scanning from multiple angles. Meanwhile, image reconstruction and enhancement algorithms will be developed to generate volumetric transcranial blood flow images of high quality. The performance of the proposed imaging system will be validated using phantoms, ex vivo skull models and in vivo healthy human volunteers. Skills: knowledge of robotics and ultrasound is preferable but not mandatory. References: [3] J. Naqvi, K. H. Yap, G. Ahmad, and J. Ghosh, Transcranial Doppler Ultrasound: A Review of the Physical Principles and Major Applications in Critical Care, International Journal of Vascular Medicine, vol. 2013, no. 1, p. 629378, 2013, doi: 10.1155/2013/629378. [4] H. Favre, Transcranial 3D ultrasound localization microscopy using a large element matrix array with a multi-lens diffracting layer: an in vitro study, Phys. Med. Biol., 2023. |
| Acoustoelectric Imaging of the Heart – A Pilot Project | Background: Ventricular tachycardias (VT) are life-threatening arrhythmias that may manifest with syncope, cardiogenic shock and/or cardiac arrest. Catheter ablation for VT has emerged as an important complementary treatment option to reduce VT recurrences and ICD shocks. Invasive electro-anatomical contact mapping is considered the gold standard to define ablation targets, yet several fundamental limitations exist with current approaches, notably the restriction to surface measurements alone. This implies that substantial parts of the ventricular myocardium (the intramural space) cannot be accessed and elude assessment during the procedure. Novel mapping technologies are needed to allow clinicians to assess the arrhythmogenic substrate in its entirety, including the full thickness of the myocardial wall. This would offer a more precise identification of ablation targets to prevent recurrences of ventricular arrhythmias. Cardiac Acoustoelectric Imaging (AEI) is a technology that exploits the interaction of an ultrasonic pressure wave and the resistivity of tissue to map current densities. AEI allows in vivo mapping and characterisation of these biological current densities beyond the tissue surface and with high spatio-temporal resolution. Only limited preclinical research on its application to cardiac mapping is available. The ability to map transmurally across all layers of the myocardium distinguishes it from other available approaches and would address one of the most fundamental limitations in contemporary clinical cardiac mapping. Aim: In this pilot project we aim to evaluate whether cardiac AEI allows transmural electrical currents to be recorded with high temporal and spatial resolution and can differentiate between endocardial, midmyocardial and epicardial impulse origins. This is the first cardiac AEI project at this institution and involves establishing a new setup for cardiac AEI, testing and optimising the workflow in an ex vivo phantom model and then a Langendorff-perfused heart model to establish feasibility, reviewing the safety of ultrasound parameters and assessing practicality for 2D and 3D mapping. If confirmed, in a second step the spatial accuracy for transmural mapping and for differentiating normal from abnormal propagation characteristics will be evaluated using intramural plunge needles as validation, to gather pilot data for a subsequent translational project. In this pilot study an experimental rig will be set up to enable AEI measurement, including acoustic and electrical sensors working in a water tank, and to generate initial pilot data to validate the feasibility of the technology in a simplified lab environment. Ultrasound engineers should ideally have previous experience of / a basic understanding of acoustoelectric imaging. Knowledge of cardiac electrophysiology and mapping is desirable. This is a joint project working with a clinical cardiac electrophysiologist. |
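For the microvascular quantification project above, one of the named geometric features, tortuosity, has a simple distance-metric form: centreline arc length divided by end-to-end chord length. The sketch below computes it on a synthetic centreline; in the project the coordinates would come from the segmentation step.

```python
# Illustrative sketch: distance-metric tortuosity of a segmented vessel
# centreline. The coordinates here are synthetic placeholders.
import numpy as np

def tortuosity(points):
    """points: (n, 2) or (n, 3) array of ordered centreline coordinates."""
    steps = np.diff(points, axis=0)
    arc_length = np.linalg.norm(steps, axis=1).sum()
    chord = np.linalg.norm(points[-1] - points[0])
    return arc_length / chord  # 1.0 for a straight vessel, larger if tortuous

t = np.linspace(0, 2 * np.pi, 200)
wiggly = np.column_stack([t, 0.3 * np.sin(5 * t)])  # a sinuous test "vessel"
print(f"tortuosity = {tortuosity(wiggly):.2f}")
```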
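The passive-active crossing idea above can be illustrated with a deliberately simplified toy model. Under a paraxial approximation, passive (receive-only) localisation from wavefront curvature places a source at depth scaling as c_true/c_assumed, while active pulse-echo time-of-flight places it at depth scaling as c_assumed/c_true; the two apparent depths agree only when the assumed beamforming speed equals the true speed. This is a sketch of the principle only, not the project's beamforming chain, and the scaling laws are my assumption for illustration.

```python
# Toy model of the crossing principle: find the assumed sound speed at which
# the passive and active apparent depths of a microbubble coincide.
import numpy as np
from scipy.optimize import brentq

C_TRUE, Z_TRUE = 1540.0, 0.03  # true tissue speed (m/s), bubble depth (m)

def depth_passive(c_assumed):
    # Curvature-based (paraxial) localisation: flatter wavefront looks deeper
    return Z_TRUE * C_TRUE / c_assumed

def depth_active(c_assumed):
    # Pulse-echo time-of-flight localisation
    return Z_TRUE * c_assumed / C_TRUE

c_est = brentq(lambda c: depth_passive(c) - depth_active(c), 1400.0, 1700.0)
print(f"estimated speed of sound: {c_est:.1f} m/s")  # recovers 1540.0
```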
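For the learning-based flow estimation project above, the conventional autocorrelation baseline mentioned in the comparison is the lag-one (Kasai-type) Doppler velocity estimator. A minimal sketch, with a synthetic single-scatterer IQ ensemble and illustrative parameter values, follows.

```python
# Sketch of the lag-one autocorrelation (Kasai-type) Doppler velocity
# estimator used as a conventional baseline. IQ data here is synthetic.
import numpy as np

def kasai_velocity(iq, prf, f0, c=1540.0):
    """iq: complex array (..., n_slow_time) of beamformed IQ samples."""
    r1 = np.sum(iq[..., 1:] * np.conj(iq[..., :-1]), axis=-1)  # lag-1 autocorr
    return c * prf * np.angle(r1) / (4.0 * np.pi * f0)

prf, f0, v_true = 5000.0, 2.5e6, 0.4       # PRF (Hz), centre freq (Hz), m/s
n = np.arange(64)
fd = 2.0 * v_true * f0 / 1540.0            # Doppler shift (Hz)
iq = np.exp(2j * np.pi * fd * n / prf)     # ideal single-scatterer ensemble
print(f"estimated velocity: {kasai_velocity(iq, prf, f0):.3f} m/s")  # ~0.400
```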
Profile: https://profiles.imperial.ac.uk/a.faisal
Contact details: a.faisal@imperial.ac.uk
| Project title | Description |
| A Human-AI Collaborative Framework for Enhanced BCI Training | Brain-computer interfaces (BCIs) represent the cutting-edge fusion of neuroscience and computer engineering and promise unparalleled control of machines directly by human thoughts [1]. A cornerstone of this technology is the synergy between the user's brain and the decoding machine [1,2,3,4]. While considerable progress has been made in refining machine decoders, guiding the brain to generate optimal signals for these decoders remains a challenge [4]. In an optimal setting, the brain learns to produce signals, and the machine learns to interpret them. Establishing a training framework that facilitates this mutual learning process is pivotal for effective BCI control [4] (a minimal example of the kind of machine decoder involved follows this table). Aims and Objectives: The central goal of this project is to develop a Human-AI Joint Training Framework tailored for enhancing BCI training. The project's objectives include: 1. Collection and analysis of brain data from participants to establish a rich dataset for developing and testing the framework. 2. Designing a joint training framework that promotes a symbiotic relationship between the human user's brain signals and the decoder. 3. Crafting innovative feedback methods that guide human participants in generating brain signals that align with the optimal distribution estimated by the decoder. 4. Evaluating the efficacy of the joint training framework in real-world scenarios, ensuring both the user and machine benefit from the mutual training process. 5. Adapting the framework to cater to individual differences, ensuring broad applicability and user-specific optimization. Skills Required for the Project: 1. Human Interaction and Ethical Considerations: skills in participant interaction, ensuring ethical data collection, and maintaining participant well-being throughout the process. 2. Programming: expertise in Python and signal processing libraries such as MNE. Familiarity with UI design. 3. Machine Learning & AI: familiarity with developing machine decoders for BCIs, alongside knowledge of feedback loop mechanisms in AI systems. 4. Neuroscience and Data Collection: experience in EEG or other neural data collection techniques, alongside a good understanding of the neuroscience principles guiding brain signal generation. [1] Millán, J. D. R. (2015). Brain-machine interfaces: the perception-action closed loop: a two-learner system. IEEE Systems, Man, and Cybernetics Magazine, 1(1), 6-8. [2] Perdikis, S., & Millan, J. D. R. (2020). Brain-machine interfaces: a tale of two learners. IEEE Systems, Man, and Cybernetics Magazine, 6(3), 12-19. [3] Vidaurre, C., Sannelli, C., Müller, K. R., & Blankertz, B. (2011). Co-adaptive calibration to improve BCI efficiency. Journal of neural engineering, 8(2), 025009. [4] Wang, H., Qi, Y., Yao, L., Wang, Y., Farina, D., & Pan, G. (2023). A Human–Machine Joint Learning Framework to Boost Endogenous BCI Training. IEEE Transactions on Neural Networks and Learning Systems. Please contact Jinpei Han <j.han20@imperial.ac.uk> for day-to-day project running questions. If you are interested in the project please sign up here and we will then contact you about project meetings: https://docs.google.com/spreadsheets/d/1KID0MMebuOAbl_WztvlB7nyzFLAfMIUSmP5ItZl2TvQ/edit?usp=sharing |
| Active adjustment of exoskeleton movements towards healthy human gait using offline reinforcement learning | This research project is aimed at developing machine learning methods for robotic control (ultimately exoskeleton control). Its focus is on investigating the feasibility of employing an offline reinforcement learning (OffRL) approach for neuromuscular gait modelling, which could be particularly beneficial in rehabilitation. By optimising the reward policies, we will try to train the model to establish sensory-motor mappings (a control policy), enabling it to generate human-like walking patterns. The training process incorporates essential factors such as human motion capture data, muscle activation patterns, and metabolic cost estimation within the reward function (a hedged sketch of such a reward follows this table). Our goal is to demonstrate the model's ability to faithfully reproduce human kinematics and ground reaction forces during walking and to generate human-like walking behaviour at different speeds, with a focus on improving walking movements in neurological patients during rehabilitation. Please sign up here https://docs.google.com/spreadsheets/d/1KID0MMebuOAbl_WztvlB7nyzFLAfMIUSmP5ItZl2TvQ/edit?usp=sharing and, if you are interested in the project, please contact Dr. Jyotindra Narayan at jnarayan@ic.ac.uk |
| CYBATHLON: Self-driving Wheelchair control for the Assistance Robot Race | The Cybathlon (https://cybathlon.ethz.ch/en/event/disciplines/rob) is a unique competition where individuals with physical disabilities compete in various tasks using advanced assistive devices and technologies. It's designed not only as a competition but also as a platform for developing technologies that can be used in everyday life to assist people with disabilities. These bionic Olympic games happen only every four years, and our lab - Team Imperial (https://www.imperial.ac.uk/engineering/news-and-events/cybathlon/team-imperial/) - which has repeatedly won medals against 65 competing teams from 5 continents, wants to participate again. Core to our experience is that we work with end-users and students from day one. This project aims to innovate and refine the current gaze-controlled wheelchair controls [1, 2], aligning them for seamless integration with the existing gaze-controlled assistive robotic arm controls [3, 4]. Your primary task will be to evaluate and enhance the capability of simultaneously managing both the wheelchair and the robotic arm through gaze control. This involves a detailed assessment of the combined system's feasibility, followed by necessary adjustments to ensure that the controls are intuitive, responsive, and efficient. In this endeavour, you'll be working closely with other members of Team Imperial, leveraging collective expertise to achieve a harmonious and effective integration of these two advanced assistive technologies. Please sign up here https://docs.google.com/spreadsheets/d/1KID0MMebuOAbl_WztvlB7nyzFLAfMIUSmP5ItZl2TvQ/edit?usp=sharing ; thereafter you can contact Dr. Bukeikhan Omarali b.omarali@imperial.ac.uk for a pre-chat. [1] M. Subramanian, N. Songur, D. Adjei, P. Orlov and A. A. Faisal, "A.Eye Drive: Gaze-based semi-autonomous wheelchair interface," 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 2019, pp. 5967-5970, doi: 10.1109/EMBC.2019.8856608. [2] M. Subramanian, S. Park, P. Orlov, A. Shafti and A. A. Faisal, "Gaze-contingent decoding of human navigation intention on an autonomous wheelchair platform," 2021 10th International IEEE/EMBS Conference on Neural Engineering (NER), Italy, 2021, pp. 335-338, doi: 10.1109/NER49283.2021.9441218. [3] A. Shafti and A. A. Faisal, "Non-invasive Cognitive-level Human Interfacing for the Robotic Restoration of Reaching & Grasping," 2021 10th International IEEE/EMBS Conference on Neural Engineering (NER), Italy, 2021, pp. 872-875, doi: 10.1109/NER49283.2021.9441453. [4] A. Shafti, P. Orlov and A. A. Faisal, "Gaze-based, Context-aware Robotic System for Assisted Reaching and Grasping," 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 2019, pp. 863-869, doi: 10.1109/ICRA.2019.8793804. |
| CYBATHLON: Fusing Robotic restoration of reaching & grasping with Autonomous Wheelchair Systems | The Cybathlon (https://cybathlon.ethz.ch/en/event/disciplines/rob) is a unique competition where individuals with physical disabilities compete in various tasks using advanced assistive devices and technologies. It's designed not only as a competition but also as a platform for developing technologies that can be used in everyday life to assist people with disabilities. These bionic Olympic games happen only every four years, and our lab - Team Imperial (https://www.imperial.ac.uk/engineering/news-and-events/cybathlon/team-imperial/) - which has repeatedly won medals against 65 competing teams from 5 continents, wants to participate again. Core to our experience is that we work with end-users and students from day one. This project centers on the hardware integration of an existing gaze-controlled robotic arm with a gaze-controlled wheelchair, based on insights from previous studies [1-4]. As part of this task, you will take on several interconnected responsibilities. Initially, you'll design and assemble a mechanical mount to attach the robotic arm securely to the wheelchair. This involves ensuring the mount is both robust and flexible, providing ease of movement and stability for the arm. Additionally, you'll work on integrating the electronic systems of the wheelchair with those of the robotic arm. This step is crucial to ensure that the two systems operate in harmony, allowing for smooth and efficient control. Your expertise will be essential in ensuring that the electronic interfaces of both the wheelchair and the robotic arm are seamlessly integrated. Another significant aspect of your role will be to collaborate closely with other members of Team Imperial to merge the control systems of the wheelchair and the robotic arm. This will involve combining the individual control mechanisms into a single, cohesive system that can be operated through gaze control. Please sign up here https://docs.google.com/spreadsheets/d/1KID0MMebuOAbl_WztvlB7nyzFLAfMIUSmP5ItZl2TvQ/edit?usp=sharing ; thereafter you can contact Dr. Bukeikhan Omarali b.omarali@imperial.ac.uk for a pre-chat. |
| CYBATHLON: Gaze control for a Wheelchair-mounted robotic arm for the Cybathlon Assistance Robot Race | The Cybathlon (https://cybathlon.ethz.ch/en/event/disciplines/rob) is a unique competition where individuals with physical disabilities compete in various tasks using advanced assistive devices and technologies. It's designed not only as a competition but also as a platform for developing technologies that can be used in everyday life to assist people with disabilities. These bionic Olympic games happen only every four years, and our lab, which has repeatedly won medals against 65 competing teams from 5 continents, wants to participate again. Core to our experience is that we work with end-users and students from day one. The project focuses on enhancing a wheelchair-mounted robotic arm with an improved gaze control interface, inspired by previous research [1-4]. This development aims to adapt the robotic arm, originally designed for kitchen and dining environments, for the specific challenges of the Cybathlon competition's Assistance Robot Race track. The key innovation lies in enabling gaze-controlled operation of the robotic arm to assist individuals with paralysis or partial paralysis. This will involve reimagining the robotic arm's functionalities to perform a variety of tasks, such as opening doors and manipulating objects, which are different from its initial capabilities. The project's foundation is based on significant prior studies in the field, ranging from gaze-based control systems for assistive robotics to cognitive-level human interfacing for the robotic restoration of mobility functions. Please sign up here https://docs.google.com/spreadsheets/d/1KID0MMebuOAbl_WztvlB7nyzFLAfMIUSmP5ItZl2TvQ/edit?usp=sharing ; thereafter you can contact Dr. Bukeikhan Omarali b.omarali@imperial.ac.uk for a pre-chat. [1] A. Shafti, P. Orlov and A. A. Faisal, "Gaze-based, Context-aware Robotic System for Assisted Reaching and Grasping," 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 2019, pp. 863-869, doi: 10.1109/ICRA.2019.8793804. [2] M. Subramanian, N. Songur, D. Adjei, P. Orlov and A. A. Faisal, "A.Eye Drive: Gaze-based semi-autonomous wheelchair interface," 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 2019, pp. 5967-5970, doi: 10.1109/EMBC.2019.8856608. [3] A. Shafti and A. A. Faisal, "Non-invasive Cognitive-level Human Interfacing for the Robotic Restoration of Reaching & Grasping," 2021 10th International IEEE/EMBS Conference on Neural Engineering (NER), Italy, 2021, pp. 872-875, doi: 10.1109/NER49283.2021.9441453. [4] M. Subramanian, S. Park, P. Orlov, A. Shafti and A. A. Faisal, "Gaze-contingent decoding of human navigation intention on an autonomous wheelchair platform," 2021 10th International IEEE/EMBS Conference on Neural Engineering (NER), Italy, 2021, pp. 335-338, doi: 10.1109/NER49283.2021.9441218. |
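For the Human-AI BCI training project above, a minimal example of the "machine decoder" side is the standard CSP + LDA motor-imagery pipeline, here built with MNE and scikit-learn on synthetic epochs. This is only an illustration of the component the joint training framework would adapt around, not the project's prescribed decoder.

```python
# Minimal motor-imagery decoding sketch: CSP spatial filtering + LDA.
# Epochs are random placeholders; real use would load EEG epochs via MNE.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 16, 250))   # 80 epochs, 16 channels, 250 samples
y = rng.integers(0, 2, size=80)      # two imagined-movement classes

clf = Pipeline([("csp", CSP(n_components=4)),
                ("lda", LinearDiscriminantAnalysis())])
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")  # ~0.5 on random data
```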
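For the offline-RL gait project above, the description names the ingredients of the reward function (motion-capture imitation, muscle activation, metabolic cost). The sketch below is one hedged way those terms might be combined; the weights and the metabolic proxy are placeholders, not the project's actual reward design.

```python
# Hedged sketch of a gait reward combining kinematic imitation, a muscle
# effort penalty and a crude metabolic-cost proxy. All values illustrative.
import numpy as np

def gait_reward(q_sim, q_ref, activations, w_track=1.0, w_act=0.1, w_met=0.05):
    """q_sim/q_ref: joint-angle vectors; activations: muscle activations in [0, 1]."""
    tracking = -w_track * np.sum((q_sim - q_ref) ** 2)   # match human kinematics
    effort = -w_act * np.sum(activations ** 2)           # discourage co-contraction
    metabolic = -w_met * np.sum(activations ** 1.5)      # energy-rate proxy
    return tracking + effort + metabolic

r = gait_reward(np.array([0.5, -0.2]), np.array([0.45, -0.25]),
                np.array([0.3, 0.6]))
print(f"reward: {r:.3f}")
```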
Profile: https://profiles.imperial.ac.uk/e.burdet
Contact details: e.burdet@imperial.ac.uk
| Project title | Description |
| Stroke Rehabilitation using Robotic Exoskeleton and Electrical Stimulation | Exoskeleton robots facilitate high-repetition therapy; however, in isolation they have a limited effect in engaging the neuromuscular system. Exoskeletons integrated with functional electrical stimulation (FES) and controlled via electromyography (EMG) may actively engage the neuromuscular system and prompt use of correct motor synergies. It is hypothesised that combining exoskeletons, FES and EMG in therapy for chronic stroke survivors with hand and wrist impairment will stimulate neuromuscular recovery and enable independence in daily activities, maximising their reintegration into normal living. Your role will be to learn how to use the exoskeleton and electrical stimulator and to design the control system, as well as to help in the development and testing of a rehabilitation training programme. You will gain first-hand experience working with engineers, therapists and stroke patients to deploy cutting-edge technologies (from HumanRobotix and Tecnalia), and work in signal processing to understand the effects of robotics and electrical stimulation on performance and physiological signals after stroke. For more details, please contact Lucille (lucille.cazenave16@imperial.ac.uk). |
Profile: https://profiles.imperial.ac.uk/h.g.krapp
Contact details: h.g.krapp@imperial.ac.uk
| Project title | Description |
| Fly-robot interface - neuronal control of biohybrid system | Neurobiological research on the integration of signals obtained from different sensor systems is often limited by the fact that the experimental animals are stationary, i.e. they are fixed to a holder within an experimental setup. The reason for this is that stable electrophysiological recordings from nerve cells are only possible if there is no relative movement between the recording electrode and the nerve cell. While this approach has successfully been used to discover some general principles of sensory information processing, it has two fundamental limitations: (i) under natural conditions the animal simultaneously receives input from virtually all of its sensor systems, and (ii) it generates motor commands (efference copies) which may modify the way in which sensor signals are being processed. To approximate more realistic conditions when monitoring neuronal activities in the fly nervous system, the animal will be mounted on a robot that is free to move in the laboratory environment. The action potentials of motion-sensitive interneurons in the fly visual system will be recorded and used to control the steering of the robot, establishing a closed-loop fly-robot interface. This configuration will allow the experimenter to selectively disable individual sensor systems to assess their respective impact on multisensory integration in the fly nervous system. This project is dedicated to the necessary miniaturization of electrophysiological equipment, e.g. amplifier circuits, so it can be mounted on a mobile robot platform to record the neural signals of identified optic flow processing interneurons. The recorded signals will be filtered and converted into control commands for the robot steering (a conceptual sketch of one such mapping follows this table). The resulting feedback system will be set up in a way that prevents the robot from colliding with any obstacles in its immediate environment. |
| Electrophysiological characterization of optic flow processing interneurons in flying insects | The visual system in many animals, including humans, contributes to state-change estimation by analysing panoramic retinal image shifts known as optic flow. Earlier studies in blowflies have revealed the underlying neuronal mechanism, which is believed to complement state-change estimation based on mechanosensory/inertial systems. This project aims to support the idea that optic flow processing in the visual system of flying insects is tuned to control species-specific natural modes of motion, which are determined by the animal's flight dynamics. Experimental evidence in support of the mode-sensing hypothesis requires a comparative study of the receptive field properties of motion-sensitive interneurons in the visual systems of insects other than dipteran flies, e.g. species belonging to the orders Orthoptera, Odonata and Lepidoptera. Hoverflies, which show distinctly different flight patterns from blowflies and also perform differently in behavioural gaze stabilization experiments, would be an ideal candidate species. Preliminary experiments have shown that hoverflies, too, employ motion-sensitive interneurons in their visual system, but only one such interneuron has been studied so far. This project requires the dissection of flying insect species for extracellular recordings upon visual motion stimuli. The neuronal responses will be analysed using customized MATLAB/Python programmes which reveal the cell's receptive field organization. From the electrophysiological results the preference of individually identifiable interneurons for specific self-motion components (state changes) will be derived. |
| Insect Wing Beat Analyzer | To understand how the insect nervous system controls specific tasks such as flight and gaze stabilization, we generate experimental stimuli as input - for instance, visual motion across the animal's eyes - and measure the time course of a relevant behavioural parameter. One important behavioural output parameter is the difference between the left and the right wing beat amplitude, delta WB. In the fruitfly Drosophila, it has been shown that delta WB is proportional to the torque the animal generates when following the direction of visual motion to minimize retinal image motion. This project aims to build a so-called 'wingbeat analyzer' which enables the measurement of delta WB (a minimal computation sketch follows this table). It is based on the difference between the output signals of two photodiodes that are differentially illuminated due to the different wing beat amplitudes on either side of the animal. The wing beat analyzer will enable simultaneous measurements of a behavioural output parameter, delta WB, and the neuronal signals contributing to its control. Ultimately, the project is relevant for the development of novel bio-inspired control architectures applied to autonomous robotics. |
| 3D head rotations in blowflies | This desk-based project aims to retrieve fly 3D head rotations from 2D high-speed videography data. By combining image analysis methods with 3D model fitting techniques, it studies the correlation of head roll, pitch, and yaw rotations induced by visual and inertial stimuli to understand the structural organization and control of the fly neck motor system. The results are relevant for the development of image stabilization systems mounted on small autonomous robotic platforms. |
| Neuronal integration of polarised light and motion vision in flies and butterflies | Many animals, including flies and butterflies, exploit patterns of linearly polarized light to guide tasks as diverse as navigation, water seeking, mate recognition, communication, and host detection. Polarized light cues in natural scenes follow distinct spatiotemporal patterns, and visual systems have consequently evolved to process polarization cues within this dynamic context. Whilst polarization cues for communication and host detection are also thought to be processed as structured spatiotemporal signals, the mechanisms by which polarization information is integrated with object detection and motion vision pathways in these contexts are less well understood. This project aims to measure neuronal responses of motion-sensitive interneurons to moving patterns of polarised light in flies and butterflies. The project will involve mounting and dissecting insects for extracellular electrophysiology, collecting wild flies and butterflies in the summer, and data analysis in Matlab. Programming experience is useful but can be learned as part of the project. |
| Closed-loop polarotaxis in flying insects | Many insects, including flies and butterflies, orient themselves relative to the pattern of polarised skylight. This is thought to underlie a form of navigation or spatial orientation, termed polarotaxis. Previous results in the lab suggest that flies are more likely to orient to a polarised light pattern when they are free to rotate, rather than when tethered in place on a wing beat analyser that enables the measurement of the animal’s intended turns. This difference may arise due to the absence of visual feedback when tethered in place and can be tested by closing the loop such that the animal’s movement is fed back to update the polariser position. The aim of this project is to compare polarotaxis in tethered flying insects, including flies and butterflies, under both open- and closed-loop conditions. Methods will include image analysis for pose estimation (including neural network training in Python using DeepLabCut), programming stepper motors using Arduino, and data analysis in Matlab or Python. Familiarity with programming is helpful but could be learned at the start of the project. |
| Electrophysiological characterization of forward model adaptation in walking flies | A stable and accurate perception of the world ultimately demands a distinction between the sensory consequences of self-motion, and those arising from external changes – a failure to do so would result in a futile cycle of responding to our own motion. Recent neurophysiological evidence suggests that flies anticipate the sensory consequences of their actions (a so-called forward model) to prevent stabilization reflexes from occurring in response to volitional movements. In theory, this forward model must adapt to account for variable neuronal delays and changes in the sensory environment. This project will test this hypothesis by measuring the electrophysiological responses of motion sensitive interneurons in the fly brain whilst the fly is walking within a closed-loop virtual reality platform with control over the latency of sensory feedback. This project will involve dissecting flies to access the brain whilst they walk on a spherical treadmill; extracellular electrophysiology of motion sensitive neurons during walking; data analysis in Matlab or Python. Programming experience is useful but can be learned as part of the project. |
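For the fly-robot interface above, one conceptually simple mapping from recorded spikes to steering is to compare recent firing rates of left- and right-side motion-sensitive neurons and turn away from the stronger optic-flow response. The sketch below is purely illustrative; the gains, window length and steering law are placeholders, not the project's control scheme.

```python
# Conceptual sketch: map left/right interneuron firing rates to differential
# wheel speeds for a closed-loop fly-robot interface. Values illustrative.
import numpy as np

def steering(spike_times_left, spike_times_right, window_s=0.1,
             base_speed=0.2, gain=0.002):
    """Return (left_wheel, right_wheel) speeds from recent spike counts."""
    rate_l = len(spike_times_left) / window_s    # Hz within the sliding window
    rate_r = len(spike_times_right) / window_s
    turn = gain * (rate_l - rate_r)              # steer away from stronger optic flow
    return base_speed - turn, base_speed + turn

left, right = steering(np.arange(8), np.arange(3))  # 8 vs 3 spikes in the window
print(f"wheel speeds: left={left:.3f}, right={right:.3f} m/s")
```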
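For the wingbeat analyzer project above, the core quantity, delta WB, reduces to a per-cycle amplitude difference between the two photodiode signals. A minimal sketch with synthetic signals follows; the cycle segmentation and amplitude measure are simplifying assumptions for illustration.

```python
# Minimal sketch of the wingbeat-analyzer computation: per-cycle peak-to-peak
# amplitude of each photodiode signal and their difference, delta WB.
import numpy as np

def delta_wb(left, right, fs, wingbeat_hz=150.0):
    """Peak-to-peak amplitude difference per wingbeat cycle (left - right)."""
    cycle = int(fs / wingbeat_hz)                # samples per wingbeat
    n = (len(left) // cycle) * cycle
    l = left[:n].reshape(-1, cycle)
    r = right[:n].reshape(-1, cycle)
    return (l.max(1) - l.min(1)) - (r.max(1) - r.min(1))

fs = 10000
t = np.arange(fs) / fs
left = 1.2 * np.sin(2 * np.pi * 150 * t)         # larger left-wing amplitude
right = 1.0 * np.sin(2 * np.pi * 150 * t)
print(f"mean delta WB: {delta_wb(left, right, fs).mean():.2f}")  # ~0.4, a turn signal
```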
Profile: https://profiles.imperial.ac.uk/e.drakakis
Contact details: e.drakakis@imperial.ac.uk
| Project title | Description |
| Reciprocity properties of networks of memristors | MATHEMATICAL (MATLAB) PROJECT - Memristors are novel nano-elements theorised in 1976 but only recently (2008) fabricated by Hewlett Packard Labs. Qualitatively speaking, memristors' operation and dynamics resemble naturally encountered synapses. The fabrication of memristors, though, is still in its infancy. The project investigates the reciprocity properties of networks of memristors and how/whether these can be reliably used in encryption. The student will investigate in detail the reciprocity properties of simple networks of ideal identical memristors (a sketch of a single ideal memristor model appears after this table) and, ideally, the use of such networks for the encryption of biomedical signals. Ideally the student will be using Mathematica code, but Matlab is also acceptable. Intended for a good YR4 student, or for MSc students with a strong EEE degree. (Keywords: mathematica/matlab/simulations) |
| Revisiting low-power CMOS Hodgkin-Huxley dynamics realisation | TRANSISTOR LEVEL/MATHEMATICAL PROJECT - Tough non-linear transistor-level design focusing on the celebrated, Nobel-prize-winning Hodgkin-Huxley dynamics (written out numerically in the sketch after this table). Do not choose this project unless you love realising Matlab by means of transistors! This project is based on rich previous work and can include both transistor-level design and Matlab. It provides the student with the opportunity to master transistor-level design and to familiarise oneself with the use of industrial-level software. The scope of the project is to unlock the exact synthesis of the Hodgkin-Huxley dynamics by revisiting both the transistor-level design and the mathematical approximations of the original non-linear dynamics. The project is intended for a student who has excelled in EE-1, EE-2, Bioinstrumentation and Signals & Systems courses, or for MRes students with a strong EEE degree. (Keywords: mathematics/transistors/cadence) |
| Study and Simulation of a full Cochlear Implant Processor and Stimulator | This is a transistor-level project which allows the student to master Cadence. You will be given certain already designed blocks and you will be asked to put together a full cochlear processor and stimulator. You will understand transistor-level design choices and trade-offs, architectural level constraints and you will be testing/evaluating the architecture that you designed by importing and processing in cadence different audio files. A project relying heavily on simulation. At the end of this project you will be known for your Cadence skills. |
| 4R Biosensor Longevity Strategy: An Electronics-Enabled Strategy for the Acquisition of Long-Term High-Quality Electrochemical (Bio)Sensor Data | PRACTICAL ELECTRONICS PROJECT: Electrochemical sensors and biosensors are powerful tools for in-situ monitoring. On the other hand, long-term operation of sensors in high cell density/high protein environments is beset by problems collectively referred to as biofouling. These originate from the absorption of proteins and cells onto the sensor surface and result in the build-up of a layer that reduces mass transport rates, causes changes in the local environment and passivates the sensor surface. In operational use this is experienced as falling sensor sensitivity, which ultimately limits longevity. Whilst there are strategies in the literature that mitigate these effects, none offer sufficiently robust or long-lived solutions. This project aims at investigating/realising a solution which can be codified as Recalibrate, Regenerate, Reconfigure and Replace (the 4R strategy) for miniaturised biosensors. The 4R longevity strategy is realisable by means of appropriate tailor-made electronics. |
| An energy-efficient F0 estimator | (A software desk-based project (100% MATLAB)) At the cutting edge of cochlear implant research is a massive ongoing effort to improve their effectiveness. Several Stimulation Strategies (SS) are employed to convey relevant information to individuals with hearing loss. With traditional SS (such as CIS and CA), users find it difficult to understand and appreciate tonal languages and speech in noise. There exist promising results showing that speech formants (especially F0) provide considerable comprehension of tonal languages when the F0 contours are included in the SS. In this project you are to compare currently existing methods of F0 extraction (e.g. autocorrelation, cepstrum, ...), evaluating their energy efficiency as well as their hardware implementation feasibility; you are also encouraged to propose a novel technique for estimating/extracting F0 from signals we can currently extract (instantaneous frequency, envelope slow/fast). An autocorrelation baseline is sketched after this table. Project route: 1. Survey existing F0 algorithms 2. Implement the algorithms in MATLAB/SIMULINK 3. Compare their accuracy/complexity/efficiency (Major Milestone) 4. Either: a. shortlist algorithms implementable in hardware (this is a safety net) b. propose a new/hybrid method prioritising efficiency/complexity 5. Extra: if the student has enough time left, propose hardware blocks for said implementation. Advantages: - From the student's perspective this project is very achievable in software (MATLAB) - Gives a taste of research-based projects - There are areas where a student can express creativity (proposing a hybrid F0 estimator) - This is mostly a signal processing project (many students take advanced signal processing in other Depts) - There are safety nets in the project: if the student does not manage to propose anything new, they would still be able to achieve a thorough comparison of existing F0 estimators. |
| Investigation and implementation of novel artefact suppression methodologies in bidirectional electrophysiological interfaces. | Deep Brain Stimulation involves both stimulating by means of strong pulses and recording very weak brain signals while stimulating. However, the strong stimulating pulses generate (stimulation) artefacts which contaminate or even destroy the recording of the weak signals. You will be involved in the design of innovative analogue, digital, or mixed-mode approaches for achieving real-time artefact-free biopotential recording, directly from the stimulation site. Experimental testing results using different electrode arrays, along with head-to-head comparison results (with existing artefact elimination techniques) are expected to be delivered. The project is suitable for students who wish to master (micro)electronics by testing their ability to design and test very high-performance circuits. Publication possible. |
| Design and delivery of a miniaturised, wearable, and high channel-count biopotential acquisition device. | You will be involved in the design of multichannel and high-performance prototype devices that have the potential to be used in a noisy clinical environment. Design trade-offs have to be taken into consideration and an optimal solution must be identified for the successful delivery of a noise-robust instrument that can provide low-noise recordings from multiple neuronal targets. You will gain experience of the full design cycle of a device and you will master PCB design. Such skills are snatched by well-paying companies/employers. |
| Design and delivery of a complete stimulation and recording system for DBS and SCS applications (DBS = Deep Brain Stimulation; SCS = Spinal Cord Stimulation) | You will be involved in the design of a state-of-the-art instrument that aims at providing both stimulation and recording capabilities from neural targets within the human brain and spinal cord. Various artefact suppression methodologies that require timing indicators from the stimulator, along with their real-time capabilities, will be investigated. A demanding project leading to the mastering of PCB design, stimulation topologies and recording limitations. A project suitable for ambitious candidates aiming at conducting research later or being employed in the field of medical devices. Do not take this project unless you enjoy the thrill and the challenge of producing a useful and robust instrument. |
| Micro-instrument for Traumatic Brain Injury (TBI) Monitoring | Design and realisation of a micro-instrument for the neuroelectrochemical monitoring of the injured human brain. |
| Modular Open platform for novel ultrasound imaging techniques | The development and commercialisation of novel ultrasound devices require research instruments that allow the user full control of the transmission sequences and access to the recorded echoes at the different stages of the receiver pipeline. You will be involved in the design of a modular ultrasound instrument for low-cost ultrasound imaging. The instrument is being used to develop novel diagnostic tools. This is a demanding project for those who wish to have a career in the medical device field. You will be designing and testing modules for a low-cost imager. The project carries humanitarian value. SKILLS: PCB design, FPGAs. The ambitious student who thrives in this project will be ready for both research and industry. |
| Stimulation artefact suppression during closed-loop DBS and cardiac stimulation | Investigation of circuit-level techniques aiming at the suppression of stimulation artefacts. |
| Investigation of biosignal acquisition architectures with and without analog front-ends | Typical signal readout topologies incorporate a front-end customised for the targeted biosignal. In this project the student will investigate the possibility of realising simpler data acquisition architectures which do not benefit from the presence of such a customised front-end. Such a bold move: a) leads to simpler and lower-power overall designs, a fact which supports wearability, but b) comes at the expense of lower quality of the recorded signal; however, the quality of the interfaced signal can be restored by post-processing in the digital domain. Design and performance trade-offs between architectures with and without front-ends will also be investigated. Skills: mastering simulation tools; design and physical layout of PCBs. |
| Design of Capacitorless High-performance EMG and EEG Application Specific Integrated Circuit (ASIC) in 180nm technology | Recently we have shown that it is possible to realise very large time-constants without the use of standard integrated capacitors that are very large in area. More specifically, it has been shown that by combining MOS-based capacitors (MOSCAPs) and large-value pseudo-resistors, it is possible to create the large time-constants needed for band-limiting EEG and EMG biosignals. The very high capacitive density (capacitance per unit area) of MOSCAPs leads to very significant capacitor area reductions. We have already built 64/128-channel 24-bit conversion ASIC-based nodes in 0.35um AMS technology. This project is about investigating the migration of the above basic techniques and designs onto newer 180nm technologies; comparative performance trade-offs will also be investigated. Skills: mastering Cadence simulation tools and ASIC design. |
| Experimental evaluation and optimisation of a new Simultaneous Impedance-Electromyography Recorder (SIER) instrument | Recently we have been working on the realisation of a new multi-channel skin stimulation and EMG recording instrument (concurrent stimulation and recording). SIER instruments can offer unique, highly sought-after real-time impedance and sEMG information which in turn can be used for feeding learning models and/or robotic/prosthetic devices. The project will involve the conduct of electrode-skin contact impedance (ESCI) and sEMG recording experiments by means of an already existing SIER instrument, the optimisation of the SIER instrument, and the study of techniques for the post-processing of ESCI and sEMG data. Skills: mastering lab-based/application testing skills and Matlab-based post-processing skills. |
| Study of the High-Pass Pole Shifting technique for Bidirectional Electrophysiological Interfaces | In general, closed-loop neurostimulation setups underpin the scheme: (1) stimulation > (2) readout of the stimulation response > (3) making sense of the read-out response to the stimulation > (4) control/adjustment of the stimulation characteristics > (1) delivery of adjusted stimulation. Such general schemes have been under investigation for conditions such as Parkinson's, tremor, etc. The High-Pass Pole Shifting technique focuses on step (2) of the closed-loop scheme described above and involves timely and clever switching of carefully chosen and adjustable analog front-end blocks which facilitate the readout stage (3) without sacrificing the quality of the target biosignal (which, typically, is much weaker than the stimulation artefacts) by minimising the impact of stimulation artefacts during recording. This project is primarily a simulation project which will investigate the incorporation of fractional-order transfer functions in the switched blocks of the front-end and, in particular, their effect on the recovery time of the targeted weak biosignal in the presence of strong stimulation artefacts. Skills: mastering simulation tools, fractional-order systems and Matlab-based processing and analysis tools. |
| Migration of the High-Pass Pole Shifting technique for Bidirectional Electrophysiological Interfaces onto silicon | In general, closed-loop neurostimulation setups underpin the scheme: (1) stimulation > (2) readout of the stimulation response > (3) making sense of the read-out response to the stimulation > (4) control/adjustment of the stimulation characteristics > (1) delivery of adjusted stimulation. Such general schemes have been under investigation for conditions such as Parkinson's, tremor, etc. The High-Pass Pole Shifting technique focuses on step (2) of the closed-loop scheme described above and involves timely and clever switching of carefully chosen and adjustable analog front-end blocks which facilitate the readout stage (3) without sacrificing the quality of the target biosignal (which, typically, is much weaker than the stimulation artefacts) by minimising the impact of stimulation artefacts during recording. The new technique has been confirmed by PCB-level prototypes. This project aims at investigating the migration and realisation of the High-Pass Pole Shifting technique at silicon/Application Specific Integrated Circuit (ASIC) level. Skills: mastering Cadence simulation tools; depending on time constraints, opportunities for layout as well. |
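For the memristor reciprocity project above, the natural starting point is a single ideal memristor model; a widely used one is the HP linear ion-drift model, sketched below in Python (Matlab or Mathematica would serve equally well). Parameter values are illustrative, and this is a building-block sketch, not the project's required network analysis.

```python
# Sketch of the HP linear ion-drift memristor model driven by a sinusoidal
# voltage, showing the state-dependent resistance. Values illustrative.
import numpy as np

R_ON, R_OFF, MU, D = 100.0, 16e3, 1e-14, 1e-8   # ohms, ohms, m^2/(V*s), m
dt = 1e-5
t = np.arange(0, 0.02, dt)
v = 1.0 * np.sin(2 * np.pi * 100 * t)           # 100 Hz drive voltage

w = 0.5                                          # normalised doped-region width
current = np.empty_like(t)
for k, vk in enumerate(v):
    m = R_ON * w + R_OFF * (1.0 - w)             # instantaneous memristance
    i = vk / m
    w = np.clip(w + MU * R_ON / D**2 * i * dt, 0.0, 1.0)  # linear ion drift
    current[k] = i

print(f"final memristance: {R_ON * w + R_OFF * (1 - w):.0f} ohms")
```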
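The Hodgkin-Huxley dynamics targeted by the CMOS project above are, for reference, the standard squid-axon equations; the plain-Python forward-Euler simulation below is only the mathematical side of what the transistor-level design must reproduce.

```python
# Hodgkin-Huxley simulation (standard squid-axon parameters, forward Euler).
import numpy as np

C, GNA, GK, GL = 1.0, 120.0, 36.0, 0.3       # uF/cm^2 and mS/cm^2
ENA, EK, EL = 50.0, -77.0, -54.4             # reversal potentials (mV)

def a_m(v): return 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
def b_m(v): return 4.0 * np.exp(-(v + 65) / 18)
def a_h(v): return 0.07 * np.exp(-(v + 65) / 20)
def b_h(v): return 1.0 / (1 + np.exp(-(v + 35) / 10))
def a_n(v): return 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
def b_n(v): return 0.125 * np.exp(-(v + 65) / 80)

dt, steps, i_ext = 0.01, 5000, 10.0          # ms, steps, uA/cm^2
v, m, h, n = -65.0, 0.05, 0.6, 0.32          # resting initial conditions
spikes = 0
for _ in range(steps):
    i_na = GNA * m**3 * h * (v - ENA)        # sodium current
    i_k = GK * n**4 * (v - EK)               # potassium current
    i_l = GL * (v - EL)                      # leak current
    v_new = v + dt * (i_ext - i_na - i_k - i_l) / C
    m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
    h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
    n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
    if v < 0.0 <= v_new:
        spikes += 1                          # upward zero crossing = spike
    v = v_new
print(f"{spikes} spikes in {steps * dt:.0f} ms")  # tonic firing at this drive
```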
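For the F0 estimator project above, the autocorrelation baseline named in the survey step looks roughly like the sketch below, written in Python purely for illustration (the project itself specifies MATLAB); a real estimator would add voicing checks and be compared against cepstral and other methods.

```python
# Minimal autocorrelation F0 estimator on a synthetic harmonic tone.
import numpy as np

def f0_autocorr(x, fs, fmin=60.0, fmax=400.0):
    """Estimate F0 as the autocorrelation peak within the allowed lag range."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(r[lo:hi])
    return fs / lag

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
tone = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
print(f"estimated F0: {f0_autocorr(tone, fs):.1f} Hz")  # ~220 Hz
```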
Profile: https://profiles.imperial.ac.uk/r.tanaka
Contact details: r.tanaka@imperial.ac.uk
| Project title | Description |
| Systems biology approach for mechanistic understanding of paediatric asthma exacerbations | Asthma is the most common chronic disease of childhood, affecting up to 10% of children in Westernised societies and 200,000,000 individuals worldwide. Many factors indicate the importance of the microbiome in asthma. Asthma is rare in rural societies, and its prevalence has been increasing markedly in the developing world as populations become urbanised. Exacerbations of asthma are often precipitated by otherwise trivial viral infections. Our studies have shown that the normal human airways contain a characteristic microbiome that is altered in children and adults with the illness. Asthmatic airways contain an excess of pathogens (which may damage the airways) and also lack particular commensal species that may be necessary for normal airway functions. This project will take a systems biology approach, combining experiments with primary bronchial epithelial cells, in silico modelling, and clinical data analysis, to elucidate the effects of the airway bacterial microbiome in asthma and the role of epithelial barrier integrity in disease initiation and control. We already have: (i) a preliminary mathematical model that will be used to quantify the dynamic interactions among pathogens, commensals at the airway surface, the airway barrier and the immune system; (ii) preliminary data from in vitro experiments; and (iii) clinical data to be analysed. The student(s) will apply several computational methods to identify the model structures and model parameters, using Matlab. |
| Uncovering dynamical interactions in the altered microbiota of atopic dermatitis skin – towards designing live therapeutics | Atopic dermatitis/eczema (AD) is a devastating and very common chronic skin disease affecting 15-30% of children worldwide. The ultimate aim of this project is to design live therapeutics for AD. Healthy skin is inhabited by a rich, balanced diversity of microbes, which help protect our body from invading pathogens and infections. However, this balance is thrown off on AD skin, whose microbiome is dominated by staphylococci, primarily the opportunistic pathogen S. aureus (SA). SA releases peptides (via the Agr quorum sensing (QS) system) that kill competitor microbes and damage the skin barrier, exacerbating AD symptoms. Other "friendly" staphylococci, such as S. epidermidis (SE) and S. hominis (SH), are an integral part of the healthy skin microbiome and appear to co-exist with SA on AD skin, although they too are "armed" with their own Agr QS systems. How does SA win the "battle" against SE and SH? Can we find a way to stop SA winning the battle and improve AD symptoms? This project aims to answer these two questions. The student will develop a simple mathematical model of the interspecies interactions [2], and fit the model to the experimental data to unveil the key interactions between SA, SE and SH (a minimal model-fitting sketch follows this table). |
| Integrating multi-study data to identify key microbes driving skin health and disease | The skin microbiome plays an important role in maintaining skin health and contributing to disease [1,2]. An imbalanced skin microbiome, known as dysbiosis, is associated with conditions ranging from eczema [3] to acne [4]. Despite the growing body of skin microbiome data deposited in public databases [5], identifying the key microbes driving skin health and disease remains a challenge due to the high variability in skin microbiome compositions across studies [6]. This project aims to integrate data from multiple studies to identify the key microbes driving skin health and disease (e.g., eczema and psoriasis), using statistical and machine learning approaches to untangle complex relationships within the skin microbiome. Identifying these key microbes will advance the development of data-informed in silico models of the skin microbiome. |
| Development of in silico human skin microbiome models using 16S rRNA data | This project is suitable for students who want to gain the following skills: 1) empirical data processing and pipeline development; 2) mathematical modelling of microbial communities and ecological dynamics; 3) computational modelling, parameter fitting, and data visualisation. With recent advancements in next-generation sequencing (NGS) technologies, genetic information can now be analysed in a high-throughput and cost-effective manner. This increased availability of genomic data has fuelled the expansion of its research applications, such as 16S rRNA data in microbiome analysis. However, challenges exist in interfacing such data with mathematical models of microbiome dynamics. Model development generally requires absolute abundance data to correctly infer both qualitative features (e.g. whether microbes are facilitating or inhibiting each other's growth) and quantitative features (e.g. intrinsic growth rates, strength of microbial interactions). However, absolute abundance data are not always available in 16S data sets, and even when the total microbial load is determined through quantitative PCR (qPCR), the estimates obtained are susceptible to large coefficients of variation across technical replicates. Biological noise arising from variation in 16S gene copy number across species poses another hurdle in converting gene counts to absolute abundances. This project aims to address these challenges in applying 16S rRNA data to human skin microbiome modelling. We will first develop a data processing pipeline to convert 16S rRNA data into absolute abundance data. Next, we will use the obtained absolute abundances to model the human skin microbiome, by fitting a simple generalised Lotka-Volterra (gLV) model (see the gLV sketch after this table). Finally, we will critically evaluate our data processing pipeline by comparing the obtained models to those generated from the same data using other computational methods. |
| Extracting Patient-Centered Environmental and Treatment Factors from Text using Large Language Models for Eczema Causal Forecasting | Atopic Dermatitis (AD), commonly known as eczema, is a chronic inflammatory skin disease characterised by intense itching and recurrent eczematous lesions. Its pathogenesis is multifactorial, involving complex interactions between genetic predispositions, environmental triggers, and adherence to treatment regimens. A significant portion of patient experiences, particularly regarding daily activities, symptom fluctuations, and specific treatment responses, continues to be documented by clinicians or patients in tables or free text such as diaries, messages, questionnaires, and interviews. These qualitative narratives contain rich, context-specific insights about symptom triggers and treatment efficacy that are often overlooked in conventional data analysis pipelines. Recent advances in Large Language Models (LLMs), exemplified by models like GPT-4 and LLaMA-3, have revolutionised natural language processing by demonstrating contextual understanding and structured information extraction from diverse text sources. These models present an opportunity to automatically extract structured, causally relevant features from raw patient narratives and reports (see the extraction sketch after this table). Combined with existing predictive models such as EczemaPred, this approach could facilitate the development of a patient-centric system that forecasts how eczema symptoms may change over time. This project aims to enable causal forecasting of eczema symptoms, identifying which treatments work best for individual patients and which environmental factors most strongly influence their symptoms. By doing so, we can move beyond population-level associations towards personalised care, where interventions are tailored to each patient's unique context, lifestyle, and symptom trajectory. |
| Improving Atopic Dermatitis Severity Forecasting with Transformer Models and Synthetic Time Series Augmentation | Atopic Dermatitis (AD), commonly known as eczema, is a chronic inflammatory skin disease characterised by intense itching, recurrent eczematous lesions, and complex, patient-specific symptom dynamics. Accurate short-term prediction of severity (e.g., PO-SCORAD scores) is critical for timely interventions and personalised treatment. Our group previously developed EczemaPred, a Bayesian model for forecasting AD severity, and explored LSTM-based models, including Time-aware LSTM (T-LSTM). While these models showed promise, they were constrained by limited data availability and irregular sampling intervals. Recently, transformer architectures have emerged as state-of-the-art in time series forecasting. Unlike LSTMs, transformers can model long-range dependencies and handle variable-length sequences without requiring consistent time intervals. However, their performance is often constrained by data scarcity, as transformers are data-hungry models. This project will explore synthetic time series generation based on existing patient trajectories to augment the training dataset (see the augmentation sketch after this table). This may involve models such as variational autoencoders (VAEs), generative adversarial networks (GANs), or diffusion models adapted to time series, with the goal of improving downstream forecasting performance. |
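To give a concrete flavour of the gLV modelling referred to in the microbiome projects above, here is a minimal sketch that simulates a two-species community and recovers its parameters from noisy abundance data. The species pairing, parameter values and least-squares fitting routine are illustrative assumptions, not the actual project pipeline.

```python
# Minimal generalised Lotka-Volterra (gLV) sketch: simulate a two-species
# community and recover its parameters from noisy abundance data.
# Species pairing, parameter values and the fitting routine are
# illustrative assumptions, not the actual project pipeline.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def glv_rhs(t, x, r, A):
    """gLV dynamics: dx_i/dt = x_i * (r_i + sum_j A_ij x_j)."""
    return x * (r + A @ x)

def simulate(params, t_eval, x0):
    r, A = params[:2], params[2:].reshape(2, 2)
    sol = solve_ivp(glv_rhs, (t_eval[0], t_eval[-1]), x0,
                    t_eval=t_eval, args=(r, A), rtol=1e-8)
    if not sol.success or sol.y.shape[1] != len(t_eval):
        return None                        # integration diverged
    return sol.y

# "True" community: both species self-limit; species 1 strongly inhibits
# species 2 (think SA suppressing SE on AD skin).
true = np.array([0.8, 0.6,                 # intrinsic growth rates r1, r2
                 -0.9, -0.2, -0.5, -0.8])  # interaction matrix A, row-major
t = np.linspace(0, 10, 40)
x0 = np.array([0.05, 0.05])
rng = np.random.default_rng(0)
data = simulate(true, t, x0) * rng.lognormal(0.0, 0.05, (2, len(t)))

def residuals(p):
    sim = simulate(p, t, x0)
    if sim is None:                        # penalise diverging parameter sets
        return np.full(data.size, 1e3)
    return (sim - data).ravel()

p0 = np.array([0.5, 0.5, -0.5, -0.1, -0.1, -0.5])  # rough initial guess
fit = least_squares(residuals, p0)
print("recovered r:", fit.x[:2].round(2))
print("recovered A:\n", fit.x[2:].reshape(2, 2).round(2))
```

The same structure extends to three species (SA, SE, SH) or to the airway pathogen-commensal model; the practical difficulty the 16S project highlights is that fitting only works on absolute (not relative) abundances.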
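As a hedged illustration of the LLM-based extraction step described above, the sketch below turns a free-text diary entry into a structured record. `call_llm` is a placeholder for whichever chat-completion API the project settles on, and the JSON schema is an illustrative assumption, not EczemaPred's input format.

```python
# Hypothetical sketch of extracting structured trigger/treatment factors
# from a free-text eczema diary entry with an LLM. `call_llm` is a
# placeholder, and the JSON schema below is an illustrative assumption.
import json

EXTRACTION_PROMPT = """Extract from the patient diary entry below:
  - "triggers": environmental factors mentioned (list of strings)
  - "treatments": treatments used, with adherence if stated
  - "severity_change": one of "better", "worse", "same", "unknown"
Return a single JSON object with exactly these keys.

Diary entry:
{entry}
"""

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text reply."""
    raise NotImplementedError("wire up to an actual LLM API here")

def extract_factors(entry: str) -> dict:
    reply = call_llm(EXTRACTION_PROMPT.format(entry=entry))
    record = json.loads(reply)            # parse and validate the reply
    assert {"triggers", "treatments", "severity_change"} <= record.keys()
    return record

# Example entry; the structured record could then be fed to a forecasting
# model such as EczemaPred as additional covariates.
diary = ("Flared up badly after swimming; skipped the steroid cream "
         "two nights in a row. Itching much worse.")
# extract_factors(diary)  # -> e.g. {"triggers": ["swimming"], ...}
```

Validating the returned JSON against a fixed schema, as above, is what makes the extracted factors usable as covariates in a downstream causal forecasting model.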
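Before reaching for VAEs, GANs or diffusion models, a useful baseline for the augmentation project is simple transformation-based synthesis. The sketch below jitters and magnitude-warps PO-SCORAD-like trajectories; the transformations, parameters and toy data are assumptions for illustration.

```python
# Baseline synthetic-augmentation sketch for severity time series:
# jittering and smooth magnitude-warping of PO-SCORAD-like trajectories.
# Parameters and the warping scheme are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def jitter(x, sigma=1.0):
    """Add i.i.d. Gaussian noise to each time step."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def magnitude_warp(x, sigma=0.15, n_knots=4):
    """Multiply the series by a smooth random curve interpolated from knots."""
    knots = rng.normal(1.0, sigma, size=n_knots)
    warp = np.interp(np.linspace(0, 1, len(x)),
                     np.linspace(0, 1, n_knots), knots)
    return x * warp

def augment(trajectory, n_copies=5):
    """Generate n_copies synthetic variants, clipped to the 0-103 SCORAD range."""
    return [np.clip(magnitude_warp(jitter(trajectory)), 0, 103)
            for _ in range(n_copies)]

# A toy 8-week weekly severity trajectory for a hypothetical patient
po_scorad = np.array([42., 40., 45., 38., 30., 33., 28., 25.])
synthetic = augment(po_scorad)
print(len(synthetic), "synthetic series; first:", synthetic[0].round(1))
```

Generative models would replace `augment` while keeping the same interface, so forecasting performance with and without augmentation can be compared on a fixed held-out set.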
Profile: https://profiles.imperial.ac.uk/rylie.green
Contact details: rylie.green@imperial.ac.uk
| Project title | Description |
| Living Bionics: Stimulation to drive neural network development | Electrical stimulation has been demonstrated to induce directional neurite growth in various cell types, both human and non-human, using biphasic stimulation. This research project aims to evaluate a range of sinusoidal stimulation frequencies to drive the activity, growth and neurotransmitter release of developing neurons, using an in-house cell stimulation rig. |
| Spinal cord bridge | Nerve regeneration in an injured spinal cord is often restricted. One possible reason is the lack of topographical signals from material constructs to provide contact guidance to invading cells or re-growing axons. This research project aims to evaluate electroactive scaffolds and study the effects of device topography on neural and glial cell behaviour. |
| In-Ear EEG - signal detection | This project involves implementing signal processing and machine learning methods for the detection of abnormal brain activity in traumatic brain injury (TBI), monitored through in-ear electrodes (see the feature-extraction sketch after this table). |
| Injectable electrodes: Colloidal systems for conductive nanoelectronics | This project focuses on biomaterials development to create injectable, electrically addressable colloidal systems. |
| Implanted device development for targeted in-tumour delivery of chemotherapeutics | Design and prototyping of a device for the delivery of electronic chemotherapy, including understanding fabrication and drug-loading interactions and their subsequent impact on delivery into brain tumour tissue. |
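As a hedged illustration of the signal-detection task in the in-ear EEG project, here is a minimal bandpower-plus-classifier pipeline on surrogate data. The sampling rate, band definitions, window length and toy data are all assumptions; the project itself would use real in-ear recordings and more appropriate models.

```python
# Minimal sketch of an EEG detection pipeline: bandpower features from
# short windows, fed to a simple classifier. All values are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 250            # assumed sampling rate, Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def bandpowers(window):
    """Mean PSD in each canonical EEG band for one 1-D window."""
    freqs, psd = welch(window, fs=FS, nperseg=min(len(window), FS * 2))
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]

# Toy surrogate data: "abnormal" windows carry extra low-frequency power
rng = np.random.default_rng(1)
def toy_window(abnormal):
    t = np.arange(FS * 4) / FS
    x = rng.normal(0, 1, len(t))
    if abnormal:
        x += 3 * np.sin(2 * np.pi * 2 * t)   # exaggerated delta activity
    return x

X = np.array([bandpowers(toy_window(lbl)) for lbl in (0, 1) * 50])
y = np.array([0, 1] * 50)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```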
Profile: https://profiles.imperial.ac.uk/sophie.morse11
Contact details: sophie.morse11@imperial.ac.uk
| Project title | Description |
| Non-invasive manipulation and imaging of the brain's immune system | Our brain has its own dedicated immune system and rapid response team: microglia. These cells actively survey the brain, clearing away toxins and pathogens. The ability to temporarily stimulate microglia has generated much excitement, due to its potential to treat brain diseases. For example, stimulating microglia can help clear away the amyloid-beta plaques that build up in Alzheimer's disease. Focused ultrasound is a non-invasive and targeted technology that can stimulate microglia in any region of the brain. However, how ultrasound stimulates these crucially important cells is unknown. This project aims to visualise whether focused ultrasound stimulates PIEZO1 mechanically sensitive ion channels in microglia, to better understand the mechanism of this stimulation (expertise in Dr Morse's group). A genetically encoded fluorescent reporter based on PIEZO1, GenEPi, developed in Dr Pantazis' group, will be used to visualise whether ultrasound stimulates these ion channels, which play multiple roles in the activation of microglia. The student will design a setup to simultaneously image PIEZO1 activity with confocal microscopy while performing ultrasound stimulation, which will be tested in a microglial cell line. These results will provide invaluable insight into the mechanism by which focused ultrasound stimulates microglia, allowing ultrasound treatments to be optimised for greater therapeutic benefit in neurological disorders such as Alzheimer's disease. |
| Can focused ultrasound delay Alzheimer's disease? | Focused ultrasound is a technology that has very recently been shown to restore cognitive function in Alzheimer's disease patients. This non-invasive technology can be focused onto specific regions of the brain. One theory is that it restores cognition by stimulating the innate immune cells of the brain as well as neuronal function and health. In this project you will 1) explore whether this same technology can be used to delay Alzheimer's disease as well as restore cognition, and 2) explore the mechanisms behind why these effects are observed. This will involve working with mouse brain tissue, including sectioning, staining, imaging and fluorescence microscopy. |
| Can focused ultrasound delay brain ageing? | Focused ultrasound is a technology that has very recently been shown to restore cognitive function in Alzheimer's disease mice and patients. This non-invasive technology can be focused onto specific regions of the brain. One theory is that it restores cognition by stimulating the innate immune cells of the brain as well as neuronal function and health. In this project you will 1) explore whether this same technology can be used to delay age-related cognitive decline, as well as restore cognition in Alzheimer's disease, and 2) explore the mechanisms behind why these effects are observed. This will involve working with mouse brain tissue, including sectioning, staining, imaging and fluorescence microscopy. |
| The effects of focused ultrasound on neural activity and cognition in young and aged mice | Focused ultrasound is a technology that has very recently been shown to restore cognitive function in Alzheimer's disease mice and patients. This non-invasive technology can be focused onto specific regions of the brain. One theory is that it restores cognition by stimulating the innate immune cells of the brain as well as neuronal function and health. In this project you will 1) explore whether this same technology can be used to delay age-related cognitive decline, as well as restore cognition in Alzheimer's disease, and 2) explore the mechanisms behind why these effects are observed. This will involve working with mouse brain tissue, including sectioning, staining, imaging and fluorescence microscopy. |
| Developing an algorithm to quantify how therapeutic ultrasound can modulate the brain's immune cells | Glia are essential for the brain to function properly, and they are also heavily involved when the brain does not function properly. These glial cells are involved in the immune response of the brain, clearing unwanted substances and regulating neuronal activity. We are investigating how therapeutic ultrasound can be used as a tool to modulate the behaviour of these cells, with the potential to help treat brain diseases such as brain tumours, Alzheimer's disease and Parkinson's disease. We have developed an algorithm to quantify changes in the morphology of these glial cells in an automated way, saving many hours of manual work. In this project, we want to take it a step further. Can we also quantify images of human brain tissue? What about multiple markers of glial cell activity at the same time? Can we automate the segmentation of white and grey matter regions? This project will be on the computational image-processing side of the work (see the morphology sketch after this table); however, you can also shadow and/or get experience with the staining and imaging of mouse brain tissue to see how the images are acquired. Ultimately, this project will help take the powerful ability of ultrasound, as a therapeutic technology, to modulate cellular activity in the brain a step further. Good programming/image-processing skills are needed. |
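To illustrate the kind of automated morphology quantification this project describes, here is a minimal sketch that thresholds a (hypothetical) microglia-stained image, labels the cells, and measures simple shape descriptors per cell. The staining choice, Otsu thresholding and the descriptors are assumptions, not the group's published algorithm.

```python
# Minimal sketch of automated glial-morphology quantification: threshold a
# hypothetical fluorescence image, label cells, measure shape descriptors.
import numpy as np
from skimage import filters, measure, morphology

def quantify_glia(image):
    """Return per-cell area, perimeter and circularity from a 2-D
    fluorescence image (e.g. an Iba1-stained section)."""
    thresh = filters.threshold_otsu(image)
    mask = morphology.remove_small_objects(image > thresh, min_size=50)
    labels = measure.label(mask)
    rows = []
    for region in measure.regionprops(labels):
        circ = 4 * np.pi * region.area / max(region.perimeter, 1e-9) ** 2
        rows.append({"label": region.label,
                     "area": region.area,
                     "perimeter": region.perimeter,
                     "circularity": circ})   # lower = more ramified shape
    return rows

# Toy synthetic image: a few bright blobs on a noisy background
rng = np.random.default_rng(0)
img = rng.normal(10, 2, (256, 256))
for cy, cx in [(60, 60), (150, 180), (200, 90)]:
    yy, xx = np.ogrid[:256, :256]
    img += 50 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 8 ** 2))

for cell in quantify_glia(img):
    print(cell)
```

Extensions the project asks about, such as multi-marker images or white/grey matter segmentation, would slot in as additional channels and masks around the same per-cell measurement loop.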