Student Projects

2021/22

Uncovering the Neural Code of DRL Agents

Neuroscience has evolved exquisite tools to probe the behaviour of biological neurons. Yet very few of these tools are applied to decipher the encodings of deep neural networks. This is particularly true in the field of deep reinforcement learning (DRL), where layered artificial neural networks learn mappings from observations to policies, or control signals. In this project, we build on prior work on agents trained to perform visuo-motor control, such as guiding a robot arm towards a target. The aim is to answer specific questions about parameter distributions and activation statistics, comparing these as training algorithms are altered or environments perturbed.
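As an illustration of the kind of probing involved, the sketch below (PyTorch, with a stand-in architecture and random observations rather than a real trained agent) records hidden-layer activations with forward hooks and summarises simple statistics such as the fraction of "dead" units:

    import torch
    import torch.nn as nn

    # Hypothetical policy network standing in for a trained DRL agent.
    policy = nn.Sequential(
        nn.Linear(32, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 4),
    )

    activations = {}

    def record(name):
        def hook(module, inputs, output):
            activations.setdefault(name, []).append(output.detach())
        return hook

    # Attach a forward hook to every ReLU layer.
    for name, module in policy.named_modules():
        if isinstance(module, nn.ReLU):
            module.register_forward_hook(record(name))

    # Probe with a batch of (random, stand-in) observations.
    obs = torch.randn(256, 32)
    policy(obs)

    # Per-layer statistics: mean, spread, and fraction of units that never fire.
    for name, acts in activations.items():
        a = torch.cat(acts)
        dead = (a.max(dim=0).values == 0).float().mean()
        print(f"layer {name}: mean={a.mean():.3f} std={a.std():.3f} dead={dead:.2%}")

The same hooks can be left in place while the agent interacts with a perturbed environment, giving directly comparable statistics across conditions.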


Neural Architectures for Predicting the Behaviour of Dynamical Systems

Fuelled by artificial neural architectures and backpropagation, data-driven approaches now dominate the engineering of systems for pattern recognition. However, predictive modelling of the behaviour of complex dynamical systems – such as those governed by systems of coupled differential equations – remains challenging in two key ways: (i) long-term prediction and (ii) out-of-distribution prediction. Recent progress in disentangled representations (Fotiadis et al., 2021) has nudged the field forward, but it is now time to return to the underlying neural architectures, seeking those better suited to the intrinsic dynamics implicit in a system of equations. We seek to explore different approaches to this problem, including Siamese network structures, progressive network growth or, perhaps, neurons that incorporate some form of plasticity.
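To make the first difficulty concrete, here is a minimal sketch (assuming a toy damped pendulum and a plain MLP baseline, not any architecture proposed by the project) of one-step prediction followed by a long-horizon rollout, where errors compound:

    import numpy as np
    import torch
    import torch.nn as nn

    # Generate trajectories of an illustrative damped pendulum via Euler steps.
    def simulate(theta0, omega0, dt=0.05, steps=200, g=9.81, damping=0.05):
        thetas, omegas = [theta0], [omega0]
        for _ in range(steps):
            domega = -g * np.sin(thetas[-1]) - damping * omegas[-1]
            omegas.append(omegas[-1] + dt * domega)
            thetas.append(thetas[-1] + dt * omegas[-1])
        return np.stack([thetas, omegas], axis=1)

    # Build (state_t, state_{t+1}) pairs per trajectory, so no pair straddles
    # a trajectory boundary.
    xs, ys = [], []
    for t0 in np.linspace(0.1, 2.0, 20):
        traj = simulate(t0, 0.0)
        xs.append(traj[:-1]); ys.append(traj[1:])
    x = torch.tensor(np.concatenate(xs), dtype=torch.float32)
    y = torch.tensor(np.concatenate(ys), dtype=torch.float32)

    # A plain MLP as the baseline against which alternatives would be compared.
    net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(2000):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(x), y)
        loss.backward()
        opt.step()

    # Long-horizon rollout: feed predictions back in; error grows with horizon.
    with torch.no_grad():
        state = x[:1]
        for _ in range(100):
            state = net(state)

Out-of-distribution prediction can be probed with the same script by testing on initial angles outside the training range.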


Cycle-Consistent Joint Representations for Diagnostic Images and Text

Medical image analysis has taken huge steps forward with the emergence of practical machine learning algorithms based on deep networks. However, the standard pipeline of medical image analysis involves a tedious process of human interpretation, sometimes with segmentation. Simple labelling is the usual approach, but it does not scale well. Nor is it necessarily sustainable: when new imaging methods come along, one has to start from scratch again. The aim of this project is to generalise the principle of cycle-consistent training to provide a learning signal that can aid or regularise the learning of joint image/text representations. The project is run in collaboration with Third Eye Intelligence.
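As a sketch of the principle (with linear stand-ins for what would in practice be an image CNN and a text encoder, and made-up feature dimensions), cycle-consistent training penalises the round trips image → text → image and text → image → text:

    import torch
    import torch.nn as nn

    D = 128  # shared embedding dimension (an assumption for this sketch)

    # Hypothetical encoders/decoders between modality features and a joint space.
    img_to_z = nn.Linear(1024, D)   # image features -> joint space
    z_to_txt = nn.Linear(D, 256)    # joint space -> text features
    txt_to_z = nn.Linear(256, D)    # text features -> joint space
    z_to_img = nn.Linear(D, 1024)   # joint space -> image features

    img_feat = torch.randn(8, 1024)  # stand-in image features
    txt_feat = torch.randn(8, 256)   # stand-in report-text features

    # Cycle 1: image -> text -> image should return to the starting features.
    img_cycled = z_to_img(txt_to_z(z_to_txt(img_to_z(img_feat))))
    # Cycle 2: text -> image -> text, symmetrically.
    txt_cycled = z_to_txt(img_to_z(z_to_img(txt_to_z(txt_feat))))

    cycle_loss = (nn.functional.l1_loss(img_cycled, img_feat)
                  + nn.functional.l1_loss(txt_cycled, txt_feat))

This loss needs no paired image/text labels, which is precisely why it can regularise the joint representation where labelled data is scarce.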


2017/18

Augmenting Generative Adversarial Networks with Natural Pre-image Priors

[Kai, Toni] [#1594 on Bioeng Student Project Website]

Artificial neural networks can be trained by getting them to compete against each other on achieving certain tasks. Generative Adversarial Networks (GANs) are those that, given a set of data, can be trained to generate new examples themselves. More concretely, a generative model, G, is trained to map samples from a prior distribution to the data space, whilst a discriminative model, D, is trained to distinguish between real data and data generated by G. By training G against D under this minimax objective, the generator learns to produce samples that look like those from the real dataset. GANs have previously been trained to generate realistic images of faces and the interior appearance of rooms. The aim of this project is to further improve the quality of generated data by using an additional training criterion, based on prior knowledge about the statistics of natural images. In particular, this will be based on regularisation techniques used to create “natural pre-images” – a concept used in techniques for visualising how artificial neural networks are “thinking”. This project will be carried out in collaboration with PhD candidates in the BICV group, Toni Creswell and Kai Arulkumaran, and lies in the area of deep learning, subtopic: adversarial networks.
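A minimal sketch of the alternating minimax training step, with a total-variation term standing in for a natural-image prior (architectures, dimensions and the weighting are illustrative assumptions, not the project's actual design):

    import torch
    import torch.nn as nn

    Z = 64  # prior dimension (assumption)

    G = nn.Sequential(nn.Linear(Z, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
    D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    real = torch.rand(128, 784) * 2 - 1   # stand-in batch of real images

    # Discriminator step: push D(real) -> 1, D(G(z)) -> 0.
    z = torch.randn(128, Z)
    fake = G(z).detach()
    d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool D (non-saturating form), plus a natural-image prior
    # of the pre-image kind, here total variation on the 28x28 output.
    z = torch.randn(128, Z)
    img = G(z)
    tv_img = img.view(-1, 28, 28)
    tv = ((tv_img[:, 1:, :] - tv_img[:, :-1, :]).abs().mean()
          + (tv_img[:, :, 1:] - tv_img[:, :, :-1]).abs().mean())
    g_loss = bce(D(img), torch.ones(128, 1)) + 1e-3 * tv
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The project would replace the total-variation term with the richer pre-image regularisers studied in the visualisation literature.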


Machine Learning for Surgical Planning

[Materialise] [#1658 on Bioeng Student Project Website]

In the past few years, pre-operative surgical planning has emerged as an important trend in orthopaedic surgery, as it allows surgeons to plan their approach before entering the operating room. Pre-operative planning typically starts with the reconstruction of a 3D model from 3D imaging data such as CT and MRI. This 3D model then serves as an input to identify anatomical landmarks that determine the size, position and orientation of an implant. One drawback of this approach is that it does not take surgical preferences into account. In this project, machine learning and related techniques will be applied to improve the (initial) surgical plan by incorporating surgical preferences and basing the plan on a richer set of information. A system that learns from previous patients (of a given surgeon) will be devised to suggest an improved pre-operative plan. This project will be done in collaboration with Materialise.


Learning Algorithms with Neural Turing Machines

[Kai] [#1595 on Bioeng Student Project Website]

Recurrent neural networks have been shown to be Turing-complete, which means that they can theoretically model any computable function, i.e. implement any algorithm. However, program induction (inferring programs from input-output samples) is very difficult in practice. Neural Turing Machines (NTMs) extend neural networks with a “working memory” that allows them to store data for later computation. The architecture consists of a neural network controller, a memory matrix, and read and write heads. The aim of this project is to reproduce the NTM, and use it to learn the following algorithms: repeat copy, associative recall and priority sort. This project will be carried out with PhD student Kai Arulkumaran. This project is primarily in the area of deep reinforcement learning.
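The sketch below illustrates content-based addressing, the mechanism at the heart of the NTM's read and write heads; the key, key strength and erase/add vectors are random stand-ins for quantities the controller network would emit:

    import torch
    import torch.nn.functional as F

    N, M = 128, 20                 # memory rows and row width (assumptions)
    memory = torch.randn(N, M)
    key = torch.randn(M)           # would come from the controller network
    beta = torch.tensor(5.0)       # key strength, also controller-emitted

    # Compare the key against every memory row, then read a soft blend.
    similarity = F.cosine_similarity(memory, key.unsqueeze(0), dim=1)
    weights = F.softmax(beta * similarity, dim=0)   # attention over rows
    read_vector = weights @ memory                  # (M,) blended read

    # A write is the mirror image: erase then add, gated by the same weights.
    erase = torch.sigmoid(torch.randn(M))           # controller-emitted
    add = torch.randn(M)
    memory = memory * (1 - weights.unsqueeze(1) * erase) \
             + weights.unsqueeze(1) * add

Because every operation above is differentiable, the whole memory access can be trained end-to-end with backpropagation, which is what lets the NTM learn algorithms such as copy and sort from examples.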


Tracking Axons with Generative Adversarial Imitation Learning

[Cher, Kai, Toni, Zehra] [#1596 on Bioeng Student Project Website]

Reinforcement learning aims to learn a control policy for an agent – such as a robot or intelligent algorithm – that maximises the agent’s cumulative reward. The control policy is a mapping from the agent’s states to its actions. Imitation learning, by contrast, aims to infer the reward function of an expert agent by observing its actions. Once the reward function is inferred, a control policy can be learned using reinforcement learning techniques. By using adversarial training, the policy can instead be inferred directly from the actions of the expert agent. The aim of this project is to learn a policy for tracking biological axons in microscopy images by applying generative adversarial imitation learning (see project #1594) to existing algorithms for tracking in biomedical imaging. This project will be carried out in collaboration with a team of PhD students, including Cher Bachar, Kai Arulkumaran, Toni Creswell and Zehra Uslu; it lies at the interface between machine learning, biomedical image analysis and neuroscience.
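A sketch of the adversarial part: a discriminator is trained to separate expert state-action pairs (here, from the existing tracking algorithm) from the learner's, and its output is converted into a surrogate reward. Dimensions and data are placeholders, and the reward form shown is one common choice:

    import torch
    import torch.nn as nn

    S, A = 16, 4  # state/action dimensions (assumptions for this sketch)
    D = nn.Sequential(nn.Linear(S + A, 64), nn.Tanh(), nn.Linear(64, 1))
    opt = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    expert_sa = torch.randn(256, S + A)   # from the existing tracking algorithm
    policy_sa = torch.randn(256, S + A)   # from rollouts of the learner

    # Discriminator: expert pairs labelled 1, the learner's labelled 0.
    d_loss = (bce(D(expert_sa), torch.ones(256, 1))
              + bce(D(policy_sa), torch.zeros(256, 1)))
    opt.zero_grad(); d_loss.backward(); opt.step()

    # Surrogate reward handed to any RL algorithm for the learner: high when
    # the discriminator mistakes the learner's behaviour for the expert's.
    with torch.no_grad():
        reward = -torch.log(1 - torch.sigmoid(D(policy_sa)) + 1e-8)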


Modal Dropout for Learning Visuomotor Control Policies

[Kai] [#1597 on Bioeng Student Project Website]

Dropout is a well-established technique for regularising the training of deep neural networks. It works by randomly “dropping” a subset of neurons during training, encouraging neurons within a network layer to become more independent. ModDrop extends the concept to multimodal data, allowing networks to learn cross-modality correlations whilst retaining modality-specific representations. The aim of this project is to use ModDrop to extend previous work on training a simulated Baxter robot with the deep Q-network reinforcement learning algorithm. In particular, we would like to train the robot with an RGB + depth camera, but eventually learn a policy that can be executed with an RGB-only camera. This project will be run in collaboration with PhD candidate Kai Arulkumaran, and lies at the interface between robotics, artificial intelligence and deep learning.
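A minimal sketch of the idea (not the exact ModDrop formulation, which also handles rescaling and per-modality fusion): whole modalities are zeroed at random during training, so the fused representation degrades gracefully when one stream is absent at test time:

    import torch
    import torch.nn as nn

    class ModalityDropout(nn.Module):
        def __init__(self, p=0.3):
            super().__init__()
            self.p = p  # probability of dropping each modality

        def forward(self, rgb, depth):
            if self.training:
                # One Bernoulli draw per modality per sample in the batch.
                keep_rgb = (torch.rand(rgb.size(0), 1) > self.p).float()
                keep_depth = (torch.rand(depth.size(0), 1) > self.p).float()
                rgb, depth = rgb * keep_rgb, depth * keep_depth
            return torch.cat([rgb, depth], dim=1)

    fuse = ModalityDropout(p=0.3)
    rgb_feat, depth_feat = torch.randn(32, 128), torch.randn(32, 128)
    fused = fuse(rgb_feat, depth_feat)  # feeds the shared policy/Q head
    # At deployment (fuse.eval()), the depth stream can simply be zeros,
    # which is exactly the RGB-only condition the project targets.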


Visual Turing Test on Machine-Generated Handwriting

[Toni, Stefania] [#1598 on Bioeng Student Project Website]

Turing Tests are used to determine whether a machine has human-like intelligence. This is achieved by performing a series of experiments in which a human has to determine whether they are interacting with a human or a machine. To pass a Turing Test, the machine must behave in a way that is indistinguishable from a human. We have used deep learning to train machines to dream up and generate new characters of alphabets. The aim of this project is to establish how realistic these generated images are by performing Visual Turing Tests on them. Experiments will involve presenting both real and generated examples of handwritten characters to a human audience and asking them to distinguish the real characters from the generated ones. For the machine to pass the Visual Turing Test, a human should be unable to distinguish real samples from generated samples. This project will be carried out in collaboration with PhD candidates Stefania Garasto and Toni Creswell, and involves running experiments on human perception. The applications of a machine that “imagines well” are numerous, but see Project #1599 for one example!
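Analysis of such an experiment can be as simple as a binomial test of whether the audience's accuracy exceeds chance; a sketch with made-up counts, assuming SciPy is available:

    from scipy.stats import binomtest

    # Suppose participants made 1000 real-vs-generated judgements in total and
    # were correct 540 times (illustrative numbers).
    result = binomtest(k=540, n=1000, p=0.5, alternative='greater')
    print(f"accuracy = {540/1000:.1%}, p-value vs chance = {result.pvalue:.3f}")
    # If accuracy is not significantly above 50%, humans cannot reliably tell
    # the generated characters from real ones: the machine "passes" the test.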


The Latent Space of Skin Lesions

[Lucas, Toni] [#1599 on Bioeng Student Project Website]

A commercial partner wishes to have models of how skin lesions – cancerous or not – progress over time. We wish to build statistical models of how lesions grow, and to predict how a newly imaged skin lesion will evolve given previous examples. A possible approach would be to use generative models, e.g. GANs, to explore the latent space of moles using a large unlabelled dataset, then refine the model by adding in the tracking learned over the set of labelled data. We have a dataset of about 200 cases (100 benign/100 melanomas) in which the lesion has been tracked over several years (perhaps 4-5 images per case). This project will be carried out in collaboration with the company Skin Analytics (Dr Lucas Hadjilucas) and PhD candidate Toni Creswell, and lies at the interface between artificial intelligence and medical image analysis.
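One way to phrase "growth as a path in latent space": encode two photographs of the same lesion taken years apart, then interpolate (or extrapolate) between their codes and decode. The sketch below uses toy linear networks and random tensors purely to show the shape of the computation:

    import torch
    import torch.nn as nn

    Z = 100  # latent dimension (assumption)
    G = nn.Sequential(nn.Linear(Z, 256), nn.ReLU(), nn.Linear(256, 64 * 64 * 3))
    E = nn.Sequential(nn.Linear(64 * 64 * 3, 256), nn.ReLU(), nn.Linear(256, Z))

    img_year0 = torch.randn(1, 64 * 64 * 3)  # stand-ins for registered photos
    img_year3 = torch.randn(1, 64 * 64 * 3)

    z0, z3 = E(img_year0), E(img_year3)
    # Interpolate along the inferred growth direction; alpha > 1 extrapolates,
    # i.e. predicts the lesion's appearance beyond the last visit.
    for alpha in (0.0, 0.5, 1.0, 1.5):
        z = (1 - alpha) * z0 + alpha * z3
        predicted = G(z)  # decoded image at "time" alpha

In the project, E and G would come from a GAN (or similar generative model) pre-trained on the large unlabelled mole dataset, with the labelled longitudinal cases used to learn which latent directions correspond to benign versus malignant growth.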


3D Data Synthesis to Train Artificial Neural Networks

[Cher] [#1613 on Bioeng Student Project Website]

Data synthesis is used in many different fields to augment the real datasets used for training and testing machine learning algorithms. In the field of deep learning, a large amount of data is required to train deep networks to perform a task to an acceptable standard. However, the amount of data in many areas of biomedical imaging is limited, and insufficient to train deep Artificial Neural Networks (ANNs). Generating artificial data – anatomically plausible examples of biomedical structures such as neurons – is a good way to overcome this limitation. Currently, there is a need for automated methods to analyse 3D microscopy images of cortical neurons, in order to standardise the analysis and to produce more reliable and accurate results without introducing the bias of manual or semi-automated tools. The aim of this project is to generate realistic 3D examples of neuronal axons and their images, in order to train ANNs to segment 3D axonal images or detect axonal synapses, both forms of analysis that are required in neuroscience research. This project suits someone with a good physics background, or with an interest in deepening their theoretical understanding of imaging physics. At the same time, it involves writing significant amounts of software to a high standard. This project will be carried out in collaboration with PhD candidate Cher Bachar.
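A toy example of the synthesis step (parameters such as tube width, bouton rate and noise level are arbitrary assumptions): draw a randomly curving tube into a volume, record bouton positions as labels, then blur and add noise to mimic the microscope:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    vol = np.zeros((64, 64, 64), dtype=np.float32)

    pos = np.array([32.0, 32.0, 5.0])
    direction = np.array([0.0, 0.0, 1.0])
    boutons = []                                       # ground-truth labels
    for _ in range(55):
        direction += rng.normal(scale=0.15, size=3)    # random wiggle
        direction /= np.linalg.norm(direction)
        pos += direction
        z, y, x = np.round(pos).astype(int)
        if all(2 <= c < 62 for c in (z, y, x)):
            vol[z-1:z+2, y-1:y+2, x-1:x+2] = 1.0       # ~3-voxel-wide axon
            if rng.random() < 0.1:                     # occasional swelling
                vol[z-2:z+3, y-2:y+3, x-2:x+3] = 1.0   # a bouton
                boutons.append((z, y, x))

    image = gaussian_filter(vol, sigma=1.0)            # crude PSF
    image += rng.normal(scale=0.05, size=image.shape)  # imaging noise

The physics-minded version of this replaces the Gaussian blur with a proper two-photon point-spread function and a realistic noise model, which is where the project's imaging-physics component comes in.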


Using Convolutional Neural Networks for Detecting Axonal Synapses in 3D Two-Photon Microscopy Data

[Cher, Zehra] [#1614 on Bioeng Student Project Website]

Biomedical research often relies on microscopic imaging techniques. In neuroscience, we sometimes use two-photon microscopy, which yields 3D data containing structures of interest. However, the analysis of 3D biological neuron data is still largely manual or semi-automated, owing to the lack of methods that analyse this type of data accurately and reliably. Unfortunately, machine learning methods such as Support Vector Machines (SVMs) still do not provide sufficiently accurate results without user intervention, and are prone to over-detecting blob-like noise. To address this problem, we propose using deep learning, which has been effectively applied to many difficult problems in image analysis. The aim of this project is to develop a Convolutional Neural Network (CNN) for object detection. The CNN will be trained on synthetic and/or real 3D examples of microscopic axonal data, in order to detect axonal boutons (synapses) in the images. This project will be carried out in collaboration with PhD candidates Cher Bachar and Zehra Uslu.
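A minimal sketch of such a detector (architecture and class weighting are illustrative assumptions): a small fully convolutional 3D network producing a per-voxel bouton probability map, trained with a loss that up-weights the rare positive voxels:

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv3d(16, 1, kernel_size=1),            # per-voxel logit
    )

    volume = torch.randn(1, 1, 64, 64, 64)          # stand-in 3D image
    target = (torch.rand(1, 1, 64, 64, 64) < 0.01).float()  # sparse bouton mask

    # Weight positives heavily: boutons occupy a tiny fraction of the volume.
    loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(100.0))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss = loss_fn(model(volume), target)
    opt.zero_grad(); loss.backward(); opt.step()

Training on synthetic volumes of the kind produced in project #1613 provides unlimited labelled data before fine-tuning on scarce real annotations.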


Head-Wearable 360-Degree Camera Build

[RNIB] [#1408 on Bioeng Student Project Website]

One of the problems we face is replacing the human function of sight when it is lost or impaired. The “Picture This” project aims to develop algorithms and systems to aid human navigation. Forward-facing cameras can be very helpful, but we are interested in the possibilities offered by wearable 360-degree cameras. In addition to being a potential source of navigation information, such cameras can augment forward-facing vision (both eyes and cameras!) and provide relatively easy indications of navigable routes (e.g. entrances, doorways, etc.). Previous work suggests that low-resolution images carry a surprisingly large amount of distinctive information about location. We would like to build a head-mountable 360-degree camera based around a Raspberry Pi, suitable optics, and a lightweight camera. The information collected during normal navigation with this camera has the added advantage that privacy is reasonably well preserved, allowing such devices to be used with lower risk to humans. As part of this project, we would like to measure how much navigational information is retained in the distorted view provided by such an imaging system, and also compare its performance in situations of partial visual occlusion.
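One simple way to quantify retained navigational information (the data below are random stand-ins for real captures): store heavily downsampled frames along a route, then test whether a noisy revisited frame is matched to the right place by nearest-neighbour search. Sweeping the resolution, or masking part of each frame to simulate occlusion, gives a retention curve:

    import numpy as np

    rng = np.random.default_rng(1)
    route = rng.random((500, 16, 16))      # stand-in: 500 tiny frames on a route

    query_idx = 123
    query = route[query_idx] + rng.normal(scale=0.05, size=(16, 16))  # revisit

    # Sum-of-squared-differences against every stored frame.
    ssd = ((route - query) ** 2).sum(axis=(1, 2))
    match = int(np.argmin(ssd))
    print(f"retrieved frame {match}, true frame {query_idx}")
    # Repeating this over many queries, resolutions and occlusion levels gives
    # localisation accuracy as a function of what the camera preserves.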