Imperial College London

Professor Anil Anthony Bharath

Faculty of Engineering, Department of Bioengineering

Academic Director (Singapore)
 
 
 

Contact

 

+44 (0)20 7594 5463 | a.bharath | Website

 
 

Location

 

4.12, Royal School of Mines, South Kensington Campus



 

Publications

208 results found

Liu Y, Zou Z, Yang Y, Law N-FB, Bharath AA et al., 2021, Efficient Source Camera Identification with Diversity-Enhanced Patch Selection and Deep Residual Prediction, Sensors, Vol: 21

Journal article

Rodrigues J, Bharath A, Overby D, 2021, Automated machine learning detection of transcellular pores in Schlemm's canal endothelial cells exposed to stretch, Annual Meeting of the Association for Research in Vision and Ophthalmology (ARVO), Publisher: Association for Research in Vision and Ophthalmology Inc, ISSN: 0146-0404

Conference paper

Davis BM, Guo L, Ravindran N, Shamsher E, Baekelandt V, Mitchell H, Bharath AA, De Groef L, Cordeiro MF et al., 2020, Dynamic changes in cell size and corresponding cell fate after optic nerve injury, Scientific Reports, Vol: 10, ISSN: 2045-2322

Identifying disease-specific patterns of retinal cell loss in pathological conditions has been highlighted by the emergence of techniques such as Detection of Apoptotic Retinal Cells and Adaptive Optics confocal Scanning Laser Ophthalmoscopy which have enabled single-cell visualisation in vivo. Cell size has previously been used to stratify Retinal Ganglion Cell (RGC) populations in histological samples of optic neuropathies, and early work in this field suggested that larger RGCs are more susceptible to early loss than smaller RGCs. More recently, however, it has been proposed that RGC soma and axon size may be dynamic and change in response to injury. To address this unresolved controversy, we applied recent advances in maximising information extraction from RGC populations in retinal whole mounts to evaluate the changes in RGC size distribution over time, using three well-established rodent models of optic nerve injury. In contrast to previous studies based on sampling approaches, we examined the whole Brn3a-positive RGC population at multiple time points over the natural history of these models. The morphology of over 4 million RGCs was thus assessed to glean novel insights from this dataset. RGC subpopulations were found to both increase and decrease in size over time, supporting the notion that RGC cell size is dynamic in response to injury. However, this study presents compelling evidence that smaller RGCs are lost more rapidly than larger RGCs despite the dynamism. Finally, using a bootstrap approach, the data strongly suggests that disease-associated changes in RGC spatial distribution and morphology could have potential as novel diagnostic indicators.

Journal article
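
The bootstrap analysis mentioned in the abstract above can be illustrated with a short numpy sketch. This is not the authors' code: the array names, soma-size values and the choice of statistic (difference in median cell size between two time points) are assumptions made purely for illustration.

```python
import numpy as np

def bootstrap_diff_median(sizes_a, sizes_b, n_boot=10000, seed=0):
    """Bootstrap the difference in median cell size between two populations.

    Returns the observed difference and a 95% percentile confidence interval.
    """
    rng = np.random.default_rng(seed)
    observed = np.median(sizes_b) - np.median(sizes_a)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        resample_a = rng.choice(sizes_a, size=sizes_a.size, replace=True)
        resample_b = rng.choice(sizes_b, size=sizes_b.size, replace=True)
        diffs[i] = np.median(resample_b) - np.median(resample_a)
    low, high = np.percentile(diffs, [2.5, 97.5])
    return observed, (low, high)

# Hypothetical soma-size samples (arbitrary units) at two time points after injury
baseline = np.random.default_rng(1).normal(90, 15, size=5000)
day_seven = np.random.default_rng(2).normal(84, 14, size=5000)
print(bootstrap_diff_median(baseline, day_seven))
```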

Lino M, Cantwell C, Fotiadis S, Pignatelli E, Bharath A et al., 2020, Simulating surface wave dynamics with convolutional networks, Publisher: arXiv

We investigate the performance of fully convolutional networks to simulate the motion and interaction of surface waves in open and closed complex geometries. We focus on a U-Net architecture and analyse how well it generalises to geometric configurations not seen during training. We demonstrate that a modified U-Net architecture is capable of accurately predicting the height distribution of waves on a liquid surface within curved and multi-faceted open and closed geometries, when only simple box and right-angled corner geometries were seen during training. We also consider a separate and independent 3D CNN for performing time-interpolation on the predictions produced by our U-Net. This allows generating simulations with a smaller time-step size than the one the U-Net has been trained for.

Working paper

Lourenco A, Kerfoot E, Dibblin C, Chubb H, Bharath A, Correia T, Varela M et al., 2020, Automatic estimation of left atrial function from short axis CINE-MRI using machine learning, European Society of Cardiology (ESC) Congress, Publisher: Oxford University Press, Pages: 229-229, ISSN: 0195-668X

Conference paper

Howard JP, Zaman S, Ragavan A, Hall K, Leonard G, Sutanto S, Ramadoss V, Razvi Y, Linton NF, Bharath A, Shun-Shin M, Rueckert D, Francis D, Cole G et al., 2020, Automated analysis and detection of abnormalities in transaxial anatomical cardiovascular magnetic resonance images: a proof of concept study with potential to optimize image acquisition, International Journal of Cardiovascular Imaging, Vol: 37, Pages: 1033-1042, ISSN: 1569-5794

The large number of available MRI sequences means patients cannot realistically undergo them all, so the range of sequences to be acquired during a scan is protocolled based on clinical details. Adapting this to unexpected findings identified early on in the scan requires experience and vigilance. We investigated whether deep learning of the images acquired in the first few minutes of a scan could provide an automated early alert of abnormal features. Anatomy sequences from 375 CMR scans were used as a training set. From these, we annotated 1500 individual slices and used these to train a convolutional neural network to perform automatic segmentation of the cardiac chambers, great vessels and any pleural effusions. 200 scans were used as a testing set. The system then assembled a 3D model of the thorax from which it made clinical measurements to identify important abnormalities. The system was successful in segmenting the anatomy slices (Dice 0.910) and identified multiple features which may guide further image acquisition. Diagnostic accuracy was 90.5% and 85.5% for left and right ventricular dilatation, 85% for left ventricular hypertrophy and 94.4% for ascending aorta dilatation. The area under the ROC curve for diagnosing pleural effusions was 0.91. We present proof-of-concept that a neural network can segment and derive accurate clinical measurements from a 3D model of the thorax made from transaxial anatomy images acquired in the first few minutes of a scan. This early information could lead to dynamic adaptive scanning protocols, and by focusing scanner time appropriately and prioritizing cases for supervision and early reporting, improve patient experience and efficiency.

Journal article

Dai T, Liu H, Bharath A, 2020, Episodic self-imitation learning with hindsight, Electronics (Basel), Vol: 9, ISSN: 2079-9292

Episodic self-imitation learning, a novel self-imitation algorithm with a trajectory selection module and an adaptive loss function, is proposed to speed up reinforcement learning. Compared to the original self-imitation learning algorithm, which samples good state–action pairs from the experience replay buffer, our agent leverages entire episodes with hindsight to aid self-imitation learning. A selection module is introduced to filter uninformative samples from each episode of the update. The proposed method overcomes the limitations of the standard self-imitation learning algorithm, a transitions-based method which performs poorly in handling continuous control environments with sparse rewards. From the experiments, episodic self-imitation learning is shown to perform better than baseline on-policy algorithms, achieving comparable performance to state-of-the-art off-policy algorithms in several simulated robot control tasks. The trajectory selection module is shown to prevent the agent learning undesirable hindsight experiences. With the capability of solving sparse reward problems in continuous control settings, episodic self-imitation learning has the potential to be applied to real-world problems that have continuous action spaces, such as robot guidance and manipulation.

Journal article
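
The hindsight idea used above, relabelling an episode with the goal it actually achieved so that failed trajectories still provide useful imitation targets, can be sketched generically. This is a HER-style illustration under an assumed tuple layout and a toy sparse reward, not the paper's trajectory-selection module.

```python
import numpy as np

def hindsight_relabel(episode, reward_fn):
    """Relabel an episode with the goal it actually achieved (its final state),
    so that a failed trajectory becomes a successful demonstration of the
    achieved goal. `episode` is a list of (state, action, goal, next_state)
    tuples; the layout is an assumption for this sketch."""
    achieved_goal = episode[-1][3]               # final achieved state
    relabelled = []
    for state, action, _, next_state in episode:
        reward = reward_fn(next_state, achieved_goal)
        relabelled.append((state, action, achieved_goal, next_state, reward))
    return relabelled

# Toy sparse reward: 0 when close to the goal, -1 otherwise
reward_fn = lambda s, g: 0.0 if np.linalg.norm(np.asarray(s) - np.asarray(g)) < 0.05 else -1.0
```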

Brook J, Kim M-Y, Koutsoftidis S, Pitcher D, Agha-Jaffar D, Sufi A, Jenkins C, Tzortzis K, Ma S, Jabbour R, Houston C, Handa B, Li X, Chow J-J, Jothidasan A, Bristow P, Perkins J, Harding S, Bharath A, Ng FS, Peters N, Cantwell C, Chowdhury R et al., 2020, Development of a pro-arrhythmic ex vivo intact human and porcine model: cardiac electrophysiological changes associated with cellular uncoupling, Pflügers Archiv European Journal of Physiology, Vol: 472, Pages: 1435-1446, ISSN: 0031-6768

We describe a human and large animal Langendorff experimental apparatus for live electrophysiological studies and measure the electrophysiological changes due to gap-junction uncoupling in human and porcine hearts. The resultant ex vivo intact human and porcine model can bridge the translational gap between smaller simple laboratory models and clinical research. In particular, electrophysiological models would benefit from the greater myocardial mass of a large heart due to its effects on far field signal, electrode contact issues and motion artefacts, consequently more closely mimicking the clinical setting. Porcine (n=9) and human (n=4) donor hearts were perfused on a custom-designed Langendorff apparatus. Epicardial electrograms were collected at 16 sites across the left atrium and left ventricle. 1 mM of carbenoxolone was administered at 5 ml/min to induce cellular uncoupling, and then recordings were repeated at the same sites. Changes in electrogram characteristics were analysed. We demonstrate the viability of a controlled ex vivo model of intact porcine and human hearts for electrophysiology with pharmacological modulation. Carbenoxolone reduces cellular coupling and changes contact electrogram features. The time from stimulus artefact to (-dV/dt)max increased between baseline and carbenoxolone (47.9±4.1 ms to 67.2±2.7 ms), indicating conduction slowing. The features with the largest percentage change from baseline to carbenoxolone were fractionation (+185.3%), endpoint amplitude (-106.9%), S-endpoint gradient (+54.9%), S point (-39.4%), RS ratio (+38.6%) and (-dV/dt)max (-20.9%). The physiological relevance of this methodological tool is that it provides a model to further investigate pharmacologically-induced proarrhythmic substrates.

Journal article

Lourenço A, Kerfoot E, Dibblin C, Alskaf E, Anjari M, Bharath AA, King AP, Chubb H, Correia TM, Varela M et al., 2020, Left atrial ejection fraction estimation using SEGANet for fully automated segmentation of CINE MRI, Publisher: arXiv

Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia, characterised by a rapid and irregular electrical activation of the atria. Treatments for AF are often ineffective and few atrial biomarkers exist to automatically characterise atrial function and aid in treatment selection for AF. Clinical metrics of left atrial (LA) function, such as ejection fraction (EF) and active atrial contraction ejection fraction (aEF), are promising, but have until now typically relied on volume estimations extrapolated from single-slice images. In this work, we study volumetric functional biomarkers of the LA using a fully automatic SEGmentation of the left Atrium based on a convolutional neural Network (SEGANet). SEGANet was trained using a dedicated data augmentation scheme to segment the LA, across all cardiac phases, in short axis dynamic (CINE) Magnetic Resonance Images (MRI) acquired with full cardiac coverage. Using the automatic segmentations, we plotted volumetric time curves for the LA and estimated LA EF and aEF automatically. The proposed method yields high quality segmentations that compare well with manual segmentations (Dice scores 0.93 ± 0.04, median contour distances 0.75 ± 0.31 mm and Hausdorff distances 4.59 ± 2.06 mm). LA EF and aEF are also in agreement with literature values and are significantly higher in AF patients than in healthy volunteers. Our work opens up the possibility of automatically estimating LA volumes and functional biomarkers from multi-slice CINE MRI, bypassing the limitations of current single-slice methods and improving the characterisation of atrial function in AF patients.

Working paper
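
The Dice overlap quoted in the abstract above (0.93 ± 0.04) has a one-line definition over binary masks. The sketch below is a generic numpy implementation with a toy example, not SEGANet's evaluation code.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping square masks
a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), dtype=bool); b[15:45, 15:45] = True
print(round(dice_score(a, b), 3))
```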

Uslu F, Varela M, Bharath AA, 2020, A semi-automatic method to segment the left atrium in MR volumes with varying slice numbers, 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Publisher: IEEE, Pages: 1198-1202

Atrial fibrillation (AF) is the most common sustained arrhythmia and is associated with dramatic increases in mortality and morbidity. Atrial cine MR images are increasingly used in the management of this condition, but there are few specific tools to aid in the segmentation of such data. Some characteristics of atrial cine MR (thick slices, variable number of slices in a volume) preclude the direct use of traditional segmentation tools. When combined with scarcity of labelled data and similarity of the intensity and texture of the left atrium (LA) to other cardiac structures, the segmentation of the LA in CINE MRI becomes a difficult task. To deal with these challenges, we propose a semi-automatic method to segment the left atrium (LA) in MR images, which requires an initial user click per volume. The manually given location information is used to generate a chamber location map to roughly locate the LA, which is then used as an input to a deep network with slightly over 0.5 million parameters. A tracking method is introduced to pass the location information across a volume and to remove unwanted structures in segmentation maps. According to the results of our experiments conducted on an in-house MRI dataset, the proposed method outperforms the U-Net [1] by a margin of 20 mm in Hausdorff distance and 0.17 in Dice score, with limited manual interaction.

Conference paper

Varela Anjari M, Queiros S, Anjari M, Correia T, King A, Bharath A, Lee J et al., 2020, Strain maps of the left atrium imaged with a novel high-resolution CINE MRI protocol, 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology, Publisher: IEEE, Pages: 1178-1181, ISSN: 1557-170X

To date, regional atrial strains have not been imaged in vivo, despite their potential to provide useful clinical information. To address this gap, we present a novel CINE MRI protocol capable of imaging the entire left atrium at an isotropic 2-mm resolution in one single breath-hold. As proof of principle, we acquired data in 10 healthy volunteers and 2 cardiovascular patients using this technique. We also demonstrated how regional atrial strains can be estimated from this data following a manual segmentation of the left atrium using automatic image tracking techniques. The estimated principal strains vary smoothly across the left atrium and have a similar magnitude to estimates reported in the literature.

Conference paper

Fotiadis S, Pignatelli E, Valencia ML, Cantwell C, Storkey A, Bharath AA et al., 2020, Comparing recurrent and convolutional neural networks for predicting wave propagation, Publisher: arXiv

Dynamical systems can be modelled by partial differential equations and numerical computations are used everywhere in science and engineering. In this work, we investigate the performance of recurrent and convolutional deep neural network architectures to predict the surface waves. The system is governed by the Saint-Venant equations. We improve on the long-term prediction over previous methods while keeping the inference time at a fraction of numerical simulations. We also show that convolutional networks perform at least as well as recurrent networks in this task. Finally, we assess the generalisation capability of each network by extrapolating in longer time-frames and in different physical settings.

Working paper

Dai T, Arulkumaran K, Gerbert T, Tukra S, Behbahani F, Bharath AA et al., 2020, Analysing deep reinforcement learning agents trained with domain randomisation, Publisher: arXiv

Deep reinforcement learning has the potential to train robots to perform complex tasks in the real world without requiring accurate models of the robot or its environment. A practical approach is to train agents in simulation, and then transfer them to the real world. One popular method for achieving transferability is to use domain randomisation, which involves randomly perturbing various aspects of a simulated environment in order to make trained agents robust to the reality gap. However, less work has gone into understanding such agents, which are deployed in the real world, beyond task performance. In this work we examine such agents, through qualitative and quantitative comparisons between agents trained with and without visual domain randomisation. We train agents for Fetch and Jaco robots on a visuomotor control task and evaluate how well they generalise using different testing conditions. Finally, we investigate the internals of the trained agents by using a suite of interpretability techniques. Our results show that the primary outcome of domain randomisation is more robust, entangled representations, accompanied with larger weights with greater spatial structure; moreover, the types of changes are heavily influenced by the task setup and presence of additional proprioceptive inputs. Additionally, we demonstrate that our domain randomised agents require higher sample complexity, can overfit and more heavily rely on recurrent processing. Furthermore, even with an improved saliency method introduced in this work, we show that qualitative studies may not always correspond with quantitative measures, necessitating the combination of inspection tools in order to provide sufficient insights into the behaviour of trained agents.

Working paper
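
Domain randomisation as described above amounts to resampling environment parameters for every training episode. A minimal sketch follows; the parameter names and ranges are invented for illustration and do not correspond to the Fetch/Jaco environments used in the paper.

```python
import numpy as np

def randomise_domain(rng):
    """Sample one visual/physical configuration for a training episode.

    The fields and ranges here are illustrative assumptions; a real setup
    would randomise whatever its simulator exposes (textures, lighting,
    camera pose, dynamics parameters, and so on).
    """
    return {
        "light_intensity": rng.uniform(0.5, 1.5),
        "camera_jitter_deg": rng.normal(0.0, 2.0, size=3),
        "table_rgb": rng.uniform(0.0, 1.0, size=3),
        "object_mass_kg": rng.uniform(0.05, 0.5),
    }

rng = np.random.default_rng(0)
episode_configs = [randomise_domain(rng) for _ in range(3)]
print(episode_configs[0])
```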

Flageat M, Arulkumaran K, Bharath AA, 2020, Incorporating human priors into deep reinforcement learning for robotic control, Pages: 229-234

Deep reinforcement learning (DRL) shows promise for robotic control, as it scales to high-dimensional observations and does not require a model of the robot or environment. However, properties such as control continuity or movement smoothness, which are desirable for application in the real world, will not necessarily emerge from training on reward functions based purely on task success. Inspired by human neuromotor control and movement analysis literature, we define a modular set of costs that promote more efficient, human-like movement policies. Using a simulated 3-DoF manipulator robot, we demonstrate the benefits of these costs by incorporating them into the training of a model-free DRL algorithm and decision-time planning of a trained model-based DRL algorithm. We also quantify these benefits through metrics based on the same literature, which allows for greater interpretability of learned policies, a common concern when learning policies with powerful and complex function approximators.

Conference paper
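
The modular costs described above can be pictured as penalty terms subtracted from the task reward. The sketch below uses illustrative effort and smoothness penalties with made-up weights; the actual cost set defined in the paper differs.

```python
import numpy as np

def shaped_reward(task_reward, actions, weights=(0.01, 0.001)):
    """Combine a sparse task reward with illustrative movement costs.

    `actions` is a (T, action_dim) trajectory; the two penalties discourage
    large torques and abrupt changes between consecutive actions. The exact
    cost terms and weights in the paper differ; these are placeholders.
    """
    w_effort, w_smooth = weights
    effort = np.sum(actions ** 2)
    roughness = np.sum(np.diff(actions, axis=0) ** 2)
    return task_reward - w_effort * effort - w_smooth * roughness

trajectory = np.random.default_rng(0).uniform(-1, 1, size=(50, 3))  # 3-DoF arm
print(shaped_reward(task_reward=1.0, actions=trajectory))
```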

Uslu F, Bass C, Bharath AA, 2020, PERI-Net: a parameter efficient residual inception network for medical image segmentation, Turkish Journal of Electrical Engineering and Computer Sciences, Vol: 28, Pages: 2261-2277, ISSN: 1300-0632

Journal article

Sorteberg WE, Garasto S, Cantwell CC, Bharath AA et al., 2020, Approximating the Solution of Surface Wave Propagation Using Deep Neural Networks

Poster

Sarrico M, Arulkumaran K, Agostinelli A, Richemond P, Bharath AA et al., 2019, Sample-efficient reinforcement learning with maximum entropy mellowmax episodic control, Publisher: arXiv

Deep networks have enabled reinforcement learning to scale to more complex and challenging domains, but these methods typically require large quantities of training data. An alternative is to use sample-efficient episodic control methods: neuro-inspired algorithms which use non-/semi-parametric models that predict values based on storing and retrieving previously experienced transitions. One way to further improve the sample efficiency of these approaches is to use more principled exploration strategies. In this work, we therefore propose maximum entropy mellowmax episodic control (MEMEC), which samples actions according to a Boltzmann policy with a state-dependent temperature. We demonstrate that MEMEC outperforms other uncertainty- and softmax-based exploration methods on classic reinforcement learning environments and Atari games, achieving both more rapid learning and higher final rewards.

Working paper
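
The core of the exploration strategy above is sampling actions from a Boltzmann policy over value estimates. The sketch below shows plain Boltzmann sampling with the temperature passed in as an argument; the mellowmax operator and the state-dependent temperature derivation from the paper are not reproduced.

```python
import numpy as np

def boltzmann_action(q_values, temperature, rng):
    """Sample an action index from a softmax (Boltzmann) policy over Q-values.

    MEMEC uses a state-dependent temperature; here the temperature is simply
    an argument, so how it is computed from the state is left out.
    """
    logits = q_values / max(temperature, 1e-8)
    logits = logits - logits.max()            # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return rng.choice(len(q_values), p=probs)

rng = np.random.default_rng(0)
q = np.array([1.0, 1.2, 0.8, 1.1])
print([boltzmann_action(q, temperature=0.5, rng=rng) for _ in range(5)])
```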

Agostinelli A, Arulkumaran K, Sarrico M, Richemond P, Bharath AA et al., 2019, Memory-efficient episodic control reinforcement learning with dynamic online k-means

Recently, neuro-inspired episodic control (EC) methods have been developed to overcome the data-inefficiency of standard deep reinforcement learning approaches. Using non-/semi-parametric models to estimate the value function, they learn rapidly, retrieving cached values from similar past states. In realistic scenarios, with limited resources and noisy data, maintaining meaningful representations in memory is essential to speed up the learning and avoid catastrophic forgetting. Unfortunately, EC methods have a large space and time complexity. We investigate different solutions to these problems based on prioritising and ranking stored states, as well as online clustering techniques. We also propose a new dynamic online k-means algorithm that is both computationally-efficient and yields significantly better performance at smaller memory sizes; we validate this approach on classic reinforcement learning environments and Atari games.

Working paper
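
A fixed-size memory maintained by online k-means, as referred to above, can be illustrated with the classic running-mean centroid update. The paper's dynamic variant adds prioritisation and centroid management that this sketch deliberately omits.

```python
import numpy as np

class OnlineKMeans:
    """Minimal online k-means: assign each new embedding to its nearest
    centroid and move that centroid towards it. This illustrates keeping a
    fixed-size summary of visited states; the paper's dynamic variant adds
    prioritisation and centroid creation/removal not shown here."""

    def __init__(self, k, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.centroids = rng.normal(size=(k, dim))
        self.counts = np.zeros(k, dtype=int)

    def update(self, x):
        j = np.argmin(np.linalg.norm(self.centroids - x, axis=1))
        self.counts[j] += 1
        lr = 1.0 / self.counts[j]                 # running-mean step size
        self.centroids[j] += lr * (x - self.centroids[j])
        return j

km = OnlineKMeans(k=8, dim=4)
for x in np.random.default_rng(1).normal(size=(100, 4)):
    km.update(x)
print(km.counts)
```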

Kothari S, Gionfrida L, Bharath AA, Abraham S et al., 2019, Artificial Intelligence (AI) and rheumatology: a potential partnership, Rheumatology, Vol: 58, Pages: 1894-1895, ISSN: 1462-0324

Journal article

Balaram S, Arulkumaran K, Dai T, Bharath AA et al., 2019, A maximum entropy deep reinforcement learning neural tracker, 10th International Workshop on Machine Learning in Medical Imaging (MLMI) / 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Publisher: Springer International Publishing AG, Pages: 400-408, ISSN: 0302-9743

Tracking of anatomical structures has multiple applications in the field of biomedical imaging, including screening, diagnosing and monitoring the evolution of pathologies. Semi-automated tracking of elongated structures has been previously formulated as a problem suitable for deep reinforcement learning (DRL), but it remains a challenge. We introduce a maximum entropy continuous-action DRL neural tracker capable of training from scratch in a complex environment in the presence of high noise levels, Gaussian blurring and detractors. The trained model is evaluated on two-photon microscopy images of mouse cortex. At the expense of slightly worse robustness compared to a previously applied DRL tracker, we reach significantly higher accuracy, approaching the performance of the standard hand-engineered algorithm used for neuron tracing. The higher sample efficiency of our maximum entropy DRL tracker indicates its potential of being applied directly to small biomedical datasets.

Conference paper

Bharath A, 2019, AI to assess mechanisms of glaucoma and other diseases, Annual Meeting of the Association for Research in Vision and Ophthalmology (ARVO), Publisher: Association for Research in Vision and Ophthalmology Inc, ISSN: 0146-0404

Conference paper

Garasto S, Nicola W, Bharath A, Schultz S et al., 2019, Neural sampling strategies for visual stimulus reconstruction from two-photon imaging of mouse primary visual cortex, 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER), Publisher: IEEE

Interpreting the neural code involves decoding the firing pattern of sensory neurons from the perspective of a downstream population. Performing such a read-out is an essential step for the understanding of sensory information processing in the brain and has implications for Brain-Machine Interfaces. While previous work has focused on classification algorithms to categorize stimuli using a predefined set of labels, less attention has been given to full-stimulus reconstruction, especially from calcium imaging recordings. Here, we attempt a pixel-by-pixel reconstruction of complex natural stimuli from two-photon calcium imaging of 103 neurons in layer 2/3 of mouse primary visual cortex. Using an optimal linear estimator, we investigated which factors drive the reconstruction performance at the pixel level. We find the density of receptive fields to be the most influential feature. Finally, we use the receptive field data and simulations from a linear-nonlinear Poisson model to extrapolate decoding accuracy as a function of network size. Based on our analysis on a public dataset, reconstruction performance using two-photon protocols might be considerably improved if the receptive fields are sampled more uniformly in the full visual field. These results provide practical experimental guidelines to boost the accuracy of full-stimulus reconstruction.

Conference paper
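
The optimal linear estimator used above maps recorded responses to stimulus pixels with a (regularised) least-squares fit. The sketch below uses synthetic data of matching neuron count (103); it illustrates the estimator only and does not reproduce the paper's data, preprocessing or exact regularisation.

```python
import numpy as np

# Fit a ridge-regularised linear map from neural responses R (trials x neurons)
# to stimulus pixels S (trials x pixels), then reconstruct held-out stimuli.
# All data below are synthetic placeholders.
rng = np.random.default_rng(0)
n_trials, n_neurons, n_pixels = 400, 103, 256
W_true = rng.normal(size=(n_neurons, n_pixels))
R = rng.normal(size=(n_trials, n_neurons))
S = R @ W_true + 0.5 * rng.normal(size=(n_trials, n_pixels))

lam = 1.0                                          # ridge regularisation
W_hat = np.linalg.solve(R[:300].T @ R[:300] + lam * np.eye(n_neurons),
                        R[:300].T @ S[:300])       # fit on training trials
S_pred = R[300:] @ W_hat                           # reconstruct held-out trials
corr = np.corrcoef(S_pred.ravel(), S[300:].ravel())[0, 1]
print(f"pixel-wise reconstruction correlation: {corr:.2f}")
```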

Creswell A, Bharath AA, 2019, Denoising adversarial autoencoders, IEEE Transactions on Neural Networks and Learning Systems, Vol: 30, Pages: 968-984, ISSN: 2162-2388

Unsupervised learning is of growing interest because it unlocks the potential held in vast amounts of unlabeled data to learn useful representations for inference. Autoencoders, a form of generative model, may be trained by learning to reconstruct unlabeled input data from a latent representation space. More robust representations may be produced by an autoencoder if it learns to recover clean input samples from corrupted ones. Representations may be further improved by introducing regularization during training to shape the distribution of the encoded data in the latent space. We suggest denoising adversarial autoencoders (AAEs), which combine denoising and regularization, shaping the distribution of latent space using adversarial training. We introduce a novel analysis that shows how denoising may be incorporated into the training and sampling of AAEs. Experiments are performed to assess the contributions that denoising makes to the learning of representations for classification and sample synthesis. Our results suggest that autoencoders trained using a denoising criterion achieve higher classification performance and can synthesize samples that are more consistent with the input data than those trained without a corruption process.

Journal article
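
The denoising criterion described above, recovering a clean sample from a corrupted input, can be sketched with a small autoencoder, assuming PyTorch. The adversarial regularisation of the latent space that completes the denoising AAE is omitted, and the layer sizes and noise level are arbitrary.

```python
import torch
import torch.nn as nn

# Minimal denoising-criterion sketch: corrupt the input, train the autoencoder
# to recover the clean sample. Architecture and corruption are illustrative.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784), nn.Sigmoid())
optimiser = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.rand(64, 784)                                   # stand-in for a batch of images
x_noisy = (x + 0.3 * torch.randn_like(x)).clamp(0, 1)     # corruption process
z = encoder(x_noisy)
x_recon = decoder(z)
loss = nn.functional.mse_loss(x_recon, x)                 # reconstruct the *clean* input
optimiser.zero_grad()
loss.backward()
optimiser.step()
print(float(loss))
```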

Wang Y, Yang Y, Liu Y-X, Bharath AA et al., 2019, A Recursive Ensemble Learning Approach With Noisy Labels or Unlabeled Data, IEEE Access, Vol: 7, Pages: 36459-36470, ISSN: 2169-3536

Journal article

Uslu F, Bharath AA, 2019, A recursive Bayesian approach to describe retinal vasculature geometry, Pattern Recognition, Vol: 87, Pages: 157-169, ISSN: 0031-3203

Deep networks have recently seen significant application to the analysis of medical image data, particularly for segmentation and disease classification. However, there are many situations in which the purpose of analysing a medical image is to perform parameter estimation, assess connectivity or determine geometric relationships. Some of these tasks are well served by probabilistic trackers, including Kalman and particle filters. In this work, we explore how the probabilistic outputs of a single-architecture deep network may be coupled to a probabilistic tracker, taking the form of a particle filter. The tracker provides information not easily available with current deep networks, such as a unique ordering of points along vessel centrelines and edges, whilst the construction of observation models for the tracker is simplified by the use of a deep network. We use the analysis of retinal images in several datasets as the problem domain, and compare estimates of vessel width in a standard dataset (REVIEW) with manually determined measurements.

Journal article
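
The coupling described above, where a deep network's per-pixel vessel probability acts as the observation model of a particle filter, can be sketched as one predict-weight-resample step. The motion model, step size and noise levels below are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def particle_filter_step(particles, weights, prob_map, step=2.0, rng=None):
    """One predict-weight-resample step of a 2D centreline tracker.

    `particles` is (N, 3): x, y and heading angle. Each particle's likelihood
    is read from `prob_map`, a per-pixel vessel probability such as a deep
    network's output. Generic sketch, not the paper's observation model.
    """
    rng = rng or np.random.default_rng()
    # Predict: move each particle along its heading with small perturbations.
    particles[:, 2] += rng.normal(0, 0.2, size=len(particles))
    particles[:, 0] += step * np.cos(particles[:, 2])
    particles[:, 1] += step * np.sin(particles[:, 2])
    # Weight: look up the vessel probability at each particle's position.
    xi = np.clip(particles[:, 0].astype(int), 0, prob_map.shape[1] - 1)
    yi = np.clip(particles[:, 1].astype(int), 0, prob_map.shape[0] - 1)
    weights = weights * (prob_map[yi, xi] + 1e-12)
    weights /= weights.sum()
    # Resample: draw particles proportionally to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx].copy(), np.full(len(particles), 1.0 / len(particles))

# Example: 200 particles seeded at (50, 50) on a synthetic probability map
pm = np.zeros((128, 128)); pm[48:53, :] = 1.0            # a horizontal "vessel"
p = np.tile([50.0, 50.0, 0.0], (200, 1)); w = np.full(200, 1 / 200)
p, w = particle_filter_step(p, w, pm, rng=np.random.default_rng(0))
```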

Bass C, Dai T, Billot B, Arulkumaran K, Creswell A, Clopath C, De Paola V, Bharath AA et al., 2019, Image synthesis with a convolutional capsule generative adversarial network, Medical Imaging with Deep Learning, Pages: 39-62

Machine learning for biomedical imaging often suffers from a lack of labelled training data. One solution is to use generative models to synthesise more data. To this end, we introduce CapsPix2Pix, which combines convolutional capsules with the pix2pix framework, to synthesise images conditioned on class segmentation labels. We apply our approach to a new biomedical dataset of cortical axons imaged by two-photon microscopy, as a method of data augmentation for small datasets. We evaluate performance both qualitatively and quantitatively. Quantitative evaluation is performed by using image data generated by either CapsPix2Pix or pix2pix to train a U-net on a segmentation task, then testing on real microscopy data. Our method quantitatively performs as well as pix2pix, with an order of magnitude fewer parameters. Additionally, CapsPix2Pix is far more capable at synthesising images of different appearance, but the same underlying geometry. Finally, qualitative analysis of the features learned by CapsPix2Pix suggests that individual capsules capture diverse and often semantically meaningful groups of features, covering structures such as synapses, axons and noise.

Conference paper

Cantwell C, Mohamied Y, Tzortzis K, Garasto S, Houston C, Chowdhury R, Ng F, Bharath A, Peters N et al., 2019, Rethinking multiscale cardiac electrophysiology with machine learning and predictive modelling, Computers in Biology and Medicine, Vol: 104, Pages: 339-351, ISSN: 0010-4825

We review some of the latest approaches to analysing cardiac electrophysiology data using machine learning and predictive modelling. Cardiac arrhythmias, particularly atrial fibrillation, are a major global healthcare challenge. Treatment is often through catheter ablation, which involves the targeted localised destruction of regions of the myocardium responsible for initiating or perpetuating the arrhythmia. Ablation targets are either anatomically defined, or identified based on their functional properties as determined through the analysis of contact intracardiac electrograms acquired with increasing spatial density by modern electroanatomic mapping systems. While numerous quantitative approaches have been investigated over the past decades for identifying these critical curative sites, few have provided a reliable and reproducible advance in success rates. Machine learning techniques, including recent deep-learning approaches, offer a potential route to gaining new insight from this wealth of highly complex spatio-temporal information that existing methods struggle to analyse. Coupled with predictive modelling, these techniques offer exciting opportunities to advance the field and produce more accurate diagnoses and robust personalised treatment. We outline some of these methods and illustrate their use in making predictions from the contact electrogram and augmenting predictive modelling tools, both by more rapidly predicting future states of the system and by inferring the parameters of these models from experimental observations.

Journal article

Sorteberg W, Garasto S, Cantwell C, Bharath A et al., 2019, Approximating the Solution of Surface Wave Propagation Using Deep Neural Networks, INNS Big Data and Deep Learning 2019, Publisher: Springer, ISSN: 2661-8141

Partial differential equations formalise the understanding of the behaviour of the physical world that humans acquire through experience and observation. Through their numerical solution, such equations are used to model and predict the evolution of dynamical systems. However, such techniques require extensive computational resources and assume the physics are prescribed a priori. Here, we propose a neural network capable of predicting the evolution of a specific physical phenomenon: propagation of surface waves enclosed in a tank, which, mathematically, can be described by the Saint-Venant equations. The existence of reflections and interference makes this problem non-trivial. Forecasting of future states (i.e. spatial patterns of rendered wave amplitude) is achieved from a relatively small set of initial observations. Using a network to make approximate but rapid predictions would enable the active, real-time control of physical systems, often required for engineering design. We used a deep neural network comprising three main blocks: an encoder, a propagator with three parallel Long Short-Term Memory layers, and a decoder. Results on a novel, custom dataset of simulated sequences produced by a numerical solver show reasonable predictions for as long as 80 time steps into the future on a hold-out dataset. Furthermore, we show that the network is capable of generalising to two other initial conditions that are qualitatively different from those seen at training time.

Conference paper
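
The encoder-propagator-decoder arrangement described above can be sketched compactly, assuming PyTorch. The layer sizes, a single LSTM in place of the three parallel LSTM layers, and the 32x32 grid are simplifications for illustration only.

```python
import torch
import torch.nn as nn

class WavePropagator(nn.Module):
    """Encoder -> LSTM propagator -> decoder, in the spirit of the model
    described above; not the paper's architecture or hyperparameters."""

    def __init__(self, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(32 * 8 * 8, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, 32 * 8 * 8)
        self.decoder = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                                     nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1))

    def forward(self, frames):                       # frames: (B, T, 1, 32, 32)
        b, t = frames.shape[:2]
        z = self.encoder(frames.reshape(b * t, 1, 32, 32)).reshape(b, t, -1)
        out, _ = self.lstm(z)                        # propagate the observed sequence
        last = self.to_latent(out[:, -1]).reshape(b, 32, 8, 8)
        return self.decoder(last)                    # next-frame prediction

model = WavePropagator()
print(model(torch.rand(2, 5, 1, 32, 32)).shape)      # -> torch.Size([2, 1, 32, 32])
```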

Dai T, Dubois M, Arulkumaran K, Campbell J, Bass C, Billot B, Uslu F, De Paola V, Clopath C, Bharath AA et al., 2019, Deep reinforcement learning for subpixel neural tracking, Medical Imaging with Deep Learning, Publisher: OpenReview, Pages: 130-150

Automatically tracing elongated structures, such as axons and blood vessels, is a challenging problem in the field of biomedical imaging, but one with many downstream applications. Real, labelled data is sparse, and existing algorithms either lack robustness to different datasets, or otherwise require significant manual tuning. Here, we instead learn a tracking algorithm in a synthetic environment, and apply it to tracing axons. To do so, we formulate tracking as a reinforcement learning problem, and apply deep reinforcement learning techniques with a continuous action space to learn how to track at the subpixel level. We train our model on simple synthetic data and test it on mouse cortical two-photon microscopy images. Despite the domain gap, our model approaches the performance of a heavily engineered tracker from a standard analysis suite for neuronal microscopy. We show that fine-tuning on real data improves performance, allowing better transfer when real labelled data is available. Finally, we demonstrate that our model's uncertainty measure, a feature lacking in hand-engineered trackers, corresponds with how well it tracks the structure.

Conference paper

Creswell A, Bharath A, 2018, Inverting the generator of a generative adversarial network, IEEE Transactions on Neural Networks and Learning Systems, ISSN: 2162-2388

Generative adversarial networks (GANs) learn a deep generative model that is able to synthesize novel, high-dimensional data samples. New data samples are synthesized by passing latent samples, drawn from a chosen prior distribution, through the generative model. Once trained, the latent space exhibits interesting properties that may be useful for downstream tasks such as classification or retrieval. Unfortunately, GANs do not offer an "inverse model", a mapping from data space back to latent space, making it difficult to infer a latent representation for a given data sample. In this paper, we introduce a technique, inversion, to project data samples, specifically images, to the latent space using a pretrained GAN. Using our proposed inversion technique, we are able to identify which attributes of a data set a trained GAN is able to model and quantify GAN performance, based on a reconstruction loss. We demonstrate how our proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets. We provide code for all of our experiments at https://github.com/ToniCreswell/InvertingGAN.

Journal article
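
The inversion technique described above recovers a latent vector by gradient descent on a reconstruction loss through a fixed, pretrained generator. The sketch below, assuming PyTorch, makes illustrative choices of optimiser, step count and loss; the repository linked in the abstract holds the authors' actual implementation.

```python
import torch

def invert_generator(generator, x_target, latent_dim=100, steps=500, lr=0.01):
    """Recover a latent vector whose generated image reconstructs x_target.

    `generator` is any pretrained mapping z -> image. The optimiser, step
    count and plain MSE reconstruction loss are illustrative choices, not
    the exact procedure reported in the paper.
    """
    z = torch.randn(x_target.shape[0], latent_dim, requires_grad=True)
    optimiser = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimiser.zero_grad()
        loss = torch.nn.functional.mse_loss(generator(z), x_target)
        loss.backward()            # gradients flow back to z; generator weights are not updated
        optimiser.step()
    return z.detach(), float(loss)
```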

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
