Imperial College London

Professor Anil Anthony Bharath

Faculty of Engineering, Department of Bioengineering

Academic Director (Singapore)
 
 
 

Contact

 

+44 (0)20 7594 5463 | a.bharath

 
 

Location

 

4.12, Royal School of Mines, South Kensington Campus


Publications


208 results found

Creswell A, Pouplin A, Bharath AA, 2018, Denoising Adversarial Autoencoders: Classifying Skin Lesions Using Limited Labelled Training Data., IET Computer Vision, Vol: abs/1801.00693, ISSN: 1751-9640

The authors propose a novel deep learning model for classifying medical images in the setting where there is a large amount of unlabelled medical data available, but the amount of labelled data is limited. They consider the specific case of classifying skin lesions as either benign or malignant. In this setting, the authors’ proposed approach – the semi-supervised, denoising adversarial autoencoder – is able to utilise vast amounts of unlabelled data to learn a representation for skin lesions, and small amounts of labelled data to assign class labels based on the learned representation. They perform an ablation study to analyse the contributions of both the adversarial and denoising components and compare their work with state-of-the-art results. They find that their model yields superior classification performance, especially when evaluating their model at high sensitivity values.

Journal article

Creswell A, Bharath AA, 2018, Inverting The Generator Of A Generative Adversarial Network (II)

Generative adversarial networks (GANs) learn a deep generative model that is able to synthesise novel, high-dimensional data samples. New data samples are synthesised by passing latent samples, drawn from a chosen prior distribution, through the generative model. Once trained, the latent space exhibits interesting properties that may be useful for downstream tasks such as classification or retrieval. Unfortunately, GANs do not offer an "inverse model", a mapping from data space back to latent space, making it difficult to infer a latent representation for a given data sample. In this paper, we introduce a technique, inversion, to project data samples, specifically images, to the latent space using a pre-trained GAN. Using our proposed inversion technique, we are able to identify which attributes of a dataset a trained GAN is able to model, and quantify GAN performance, based on a reconstruction loss. We demonstrate how our proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image datasets. We provide code for all of our experiments: https://github.com/ToniCreswell/InvertingGAN.

Journal article
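The inversion idea in the abstract above, recovering a latent vector by minimising a reconstruction loss between the generator's output and a target sample, can be sketched with a toy linear "generator" (all values below are invented for illustration; the paper inverts real, non-linear GANs):

```python
import numpy as np

# Hand-picked stand-in for a pre-trained generator: a fixed linear map from
# a 2-D latent space to a 4-D "data" space. The paper works with non-linear
# GANs; this toy only illustrates the optimisation loop.
G_weights = np.array([[1.0, 0.5],
                      [0.0, 1.0],
                      [2.0, -1.0],
                      [0.5, 0.5]])

def generator(z):
    return G_weights @ z

def invert(x_target, steps=500, lr=0.05):
    """Recover z by gradient descent on the reconstruction loss ||G(z) - x||^2."""
    z = np.zeros(2)
    for _ in range(steps):
        residual = generator(z) - x_target
        z -= lr * 2.0 * G_weights.T @ residual   # gradient of the squared loss
    return z

z_true = np.array([0.7, -1.3])
x = generator(z_true)          # synthesise a sample from a known latent
z_rec = invert(x)              # project it back into the latent space
print(np.allclose(z_rec, z_true, atol=1e-3))  # → True
```

For a real GAN the same loop runs through the network's computational graph with an automatic-differentiation framework rather than a hand-written gradient.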

Creswell A, White T, Dumoulin V, Arulkumaran K, Sengupta B, Bharath AA et al., 2018, Generative adversarial networks: an overview, IEEE Signal Processing Magazine, Vol: 35, Pages: 53-65, ISSN: 1053-5888

Generative adversarial networks (GANs) provide a way to learn deep representations without extensively annotated training data. They achieve this by deriving backpropagation signals through a competitive process involving a pair of networks. The representations that can be learned by GANs may be used in a variety of applications, including image synthesis, semantic image editing, style transfer, image superresolution, and classification. The aim of this review article is to provide an overview of GANs for the signal processing community, drawing on familiar analogies and concepts where possible. In addition to identifying different methods for training and constructing GANs, we also point to remaining challenges in their theory and application.

Journal article

Uslu F, Bharath AA, 2018, A Multi-task Network to Detect Junctions in Retinal Vasculature, Publisher: SPRINGER INTERNATIONAL PUBLISHING AG

Working paper

Creswell A, Mohamied Y, Sengupta B, Bharath AA et al., 2017, Adversarial Information Factorization

We propose a novel generative model architecture designed to learn representations for images that factor out a single attribute from the rest of the representation. A single object may have many attributes which, when altered, do not change the identity of the object itself. Consider the human face; the identity of a particular person is independent of whether or not they happen to be wearing glasses. The attribute of wearing glasses can be changed without changing the identity of the person. However, the ability to manipulate and alter image attributes without altering the object identity is not a trivial task. Here, we are interested in learning a representation of the image that separates the identity of an object (such as a human face) from an attribute (such as 'wearing glasses'). We demonstrate the success of our factorization approach by using the learned representation to synthesize the same face with and without a chosen attribute. We refer to this specific synthesis process as image attribute manipulation. We further demonstrate that our model achieves scores competitive with the state of the art on a facial attribute classification task.

Journal article

Arulkumaran K, Deisenroth MP, Brundage M, Bharath AA et al., 2017, A brief survey of deep reinforcement learning, IEEE Signal Processing Magazine, Vol: 34, Pages: 26-38, ISSN: 1053-5888

Deep reinforcement learning (DRL) is poised to revolutionize the field of artificial intelligence (AI) and represents a step toward building autonomous systems with a higher-level understanding of the visual world. Currently, deep learning is enabling reinforcement learning (RL) to scale to problems that were previously intractable, such as learning to play video games directly from pixels. DRL algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of RL, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep RL, including the deep Q-network (DQN), trust region policy optimization (TRPO), and asynchronous advantage actor critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via RL. To conclude, we describe several current areas of research within the field.

Journal article


Creswell A, Bharath AA, Sengupta B, 2017, LatentPoison - Adversarial Attacks On The Latent Space

Robustness and security of machine learning (ML) systems are intertwined, wherein a non-robust ML system (classifiers, regressors, etc.) can be subject to attacks using a wide variety of exploits. With the advent of scalable deep learning methodologies, a lot of emphasis has been put on the robustness of supervised, unsupervised and reinforcement learning algorithms. Here, we study the robustness of the latent space of a deep variational autoencoder (dVAE), an unsupervised generative framework, to show that it is indeed possible to perturb the latent space, flip the class predictions and keep the classification probability approximately equal before and after an attack. This means that an agent that looks at the outputs of a decoder would remain oblivious to an attack.

Journal article

Bass C, Helkkula P, De Paola V, Clopath C, Bharath AA et al., 2017, Detection of axonal synapses in 3D two-photon images, PLoS One, Vol: 12, Pages: 1-18, ISSN: 1932-6203

Studies of structural plasticity in the brain often require the detection and analysis of axonal synapses (boutons). To date, bouton detection has been largely manual or semi-automated, relying on a step that traces the axons before detecting the boutons. If tracing the axon fails, the accuracy of bouton detection is compromised. In this paper, we propose a new algorithm that does not require tracing the axon to detect axonal boutons in 3D two-photon images taken from the mouse cortex. To find the most appropriate techniques for this task, we compared several well-known algorithms for interest point detection and feature descriptor generation. The final algorithm proposed has the following main steps: (1) a Laplacian of Gaussian (LoG) based feature enhancement module to accentuate the appearance of boutons; (2) a Speeded Up Robust Features (SURF) interest point detector to find candidate locations for feature extraction; (3) non-maximum suppression to eliminate candidates that were detected more than once in the same local region; (4) generation of feature descriptors based on Gabor filters; (5) a Support Vector Machine (SVM) classifier, trained on features from labelled data, used to distinguish between bouton and non-bouton candidates. We found that our method achieved a Recall of 95%, Precision of 76%, and F1 score of 84% within a new dataset that we make available for assessing bouton detection. On average, Recall and F1 score were significantly better than the current state-of-the-art method, while Precision was not significantly different. In conclusion, in this article we demonstrate that our approach, which is independent of axon tracing, can detect boutons to a high level of accuracy, and improves on the detection performance of existing approaches. The data and code (with an easy to use GUI) used in this article are available from open source repositories.

Journal article
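Step (3) of the pipeline above, non-maximum suppression over candidate locations, can be illustrated with a minimal greedy sketch (coordinates, scores and radius below are invented; the paper's implementation details may differ):

```python
import numpy as np

def non_max_suppression(points, scores, radius):
    """Greedy non-maximum suppression: keep the highest-scoring candidate
    and drop any other candidate lying within `radius` of one already kept.
    `points` is an (N, 3) array of candidate coordinates in the 3-D stack."""
    order = np.argsort(scores)[::-1]          # strongest candidates first
    kept = []
    for i in order:
        if all(np.linalg.norm(points[i] - points[j]) > radius for j in kept):
            kept.append(int(i))
    return sorted(kept)

# Three candidates: the last two are 1 voxel apart, so with radius 2 only
# the stronger of that pair survives alongside the isolated first point.
pts = np.array([[0.0, 0.0, 0.0], [10.0, 10.0, 5.0], [10.0, 11.0, 5.0]])
sc = np.array([0.9, 0.6, 0.8])
print(non_max_suppression(pts, sc, radius=2.0))  # → [0, 2]
```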

Creswell A, Arulkumaran K, Bharath AA, 2017, On denoising autoencoders trained to minimise binary cross-entropy

Denoising autoencoders (DAEs) are powerful deep learning models used for feature extraction, data generation and network pre-training. DAEs consist of an encoder and decoder which may be trained simultaneously to minimise a loss function between an input and the reconstruction of a corrupted version of the input. There are two common loss functions used for training autoencoders: the mean-squared error (MSE) and the binary cross-entropy (BCE). When training autoencoders on image data, a natural choice of loss function is BCE, since pixel values may be normalised to take values in [0,1] and the decoder model may be designed to generate samples that take values in (0,1). We show theoretically that DAEs trained to minimise BCE may be used to take gradient steps in the data space towards regions of high probability under the data-generating distribution. Previously this had only been shown for DAEs trained using MSE. As a consequence of the theory, iterative application of a trained DAE moves a data sample from regions of low probability to regions of higher probability under the data-generating distribution. Firstly, we validate the theory by showing that novel data samples, consistent with the training data, may be synthesised when the initial data samples are random noise. Secondly, we motivate the theory by showing that initial data samples synthesised via other methods may be improved via iterative application of a trained DAE to those initial samples.

Working paper
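The iterative behaviour described above, repeatedly applying a trained DAE to pull a sample towards high-probability regions, can be mimicked exactly in a 1-D Gaussian toy problem, where the MSE-optimal denoiser has a closed form (a sketch only; the paper's result concerns BCE-trained networks, and all numbers here are invented):

```python
import numpy as np

# Data-generating distribution: 1-D Gaussian N(mu, sigma2). For Gaussian
# corruption noise of variance noise2, the MSE-optimal denoiser is a linear
# shrinkage towards the mean, so we can iterate it without training anything.
mu, sigma2, noise2 = 3.0, 1.0, 0.5

def optimal_dae(x):
    """E[x_clean | x_corrupted] for Gaussian data corrupted by Gaussian noise."""
    shrink = sigma2 / (sigma2 + noise2)
    return mu + shrink * (x - mu)

# Start from "random noise" far from the data and iterate the denoiser.
x = np.array([-20.0, 0.0, 25.0])
for _ in range(30):
    x = optimal_dae(x)

# Every sample has been drawn towards the mode of the data distribution.
print(np.allclose(x, mu, atol=1e-3))  # → True
```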

Creswell A, Bharath AA, 2016, Inverting The Generator Of A Generative Adversarial Network

Generative adversarial networks (GANs) learn to synthesise new samples from a high-dimensional distribution by passing samples drawn from a latent space through a generative network. When the high-dimensional distribution describes images of a particular data set, the network should learn to generate visually similar image samples for latent variables that are close to each other in the latent space. For tasks such as image retrieval and image classification, it may be useful to exploit the arrangement of the latent space by projecting images into it, and using this as a representation for discriminative tasks. GANs often consist of multiple layers of non-linear computations, making them very difficult to invert. This paper introduces techniques for projecting image samples into the latent space using any pre-trained GAN, provided that the computational graph is available. We evaluate these techniques on both MNIST digits and Omniglot handwritten characters. In the case of MNIST digits, we show that projections into the latent space maintain information about the style and the identity of the digit. In the case of Omniglot characters, we show that even characters from alphabets that have not been seen during training may be projected well into the latent space; this suggests that this approach may have applications in one-shot learning.

Working paper

Creswell A, Arulkumaran K, Bharath AA, 2016, Improving Sampling from Generative Autoencoders with Markov Chains

We focus on generative autoencoders, such as variational or adversarial autoencoders, which jointly learn a generative model alongside an inference model. Generative autoencoders are those which are trained to softly enforce a prior on the latent distribution learned by the inference model. We call the distribution to which the inference model maps observed samples the learned latent distribution, which may not be consistent with the prior. We formulate a Markov chain Monte Carlo (MCMC) sampling process, equivalent to iteratively decoding and encoding, which allows us to sample from the learned latent distribution. Since the generative model learns to map from the learned latent distribution, rather than the prior, we may use MCMC to improve the quality of samples drawn from the generative model, especially when the learned latent distribution is far from the prior. Using MCMC sampling, we are able to reveal previously unseen differences between generative autoencoders trained either with or without a denoising criterion.

Working paper
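A minimal sketch of the decode/encode Markov chain described above, using hand-built stand-ins for the decoder and inference model (every function and constant here is invented for illustration, not a trained network):

```python
import numpy as np

# Toy generative autoencoder in which the learned latent distribution
# (centred at z = 2.0) differs from the prior (a standard normal).
def decode(z):
    return 2.0 * z + 1.0          # latent -> data

def encode(x):
    # Inference model: maps data back towards the *learned* latent
    # distribution, here a contraction around z = 2.
    return 2.0 + 0.45 * ((x - 1.0) / 2.0 - 2.0)

def mcmc_refine(z0, steps=50):
    """Iteratively decode and encode: the chain drifts from the prior
    sample towards the learned latent distribution."""
    z = z0
    for _ in range(steps):
        z = encode(decode(z))
    return z

z_prior = np.array([-3.0, 0.0, 3.0])   # samples drawn from the prior
z_chain = mcmc_refine(z_prior)
print(np.allclose(z_chain, 2.0, atol=1e-6))  # → True: chain reaches the learned region
```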

Charalambous CC, Bharath AA, 2016, A data augmentation methodology for training machine/deep learning gait recognition algorithms

There are several confounding factors that can reduce the accuracy of gait recognition systems. These factors can reduce the distinctiveness, or alter the features used to characterise gait; they include variations in clothing, lighting, pose and environment, such as the walking surface. Full invariance to all confounding factors is challenging in the absence of high-quality labelled training data. We introduce a simulation-based methodology and a subject-specific dataset which can be used for generating synthetic video frames and sequences for data augmentation. With this methodology, we generated a multi-modal dataset. In addition, we supply simulation files that provide the ability to simultaneously sample from several confounding variables. The basis of the data is real motion capture data of subjects walking and running on a treadmill at different speeds. Results from gait recognition experiments suggest that information about the identity of subjects is retained within synthetically generated examples. The dataset and methodology allow studies into fully-invariant identity recognition spanning a far greater number of observation conditions than would otherwise be possible.

Working paper

Creswell A, Bharath AA, 2016, Task Specific Adversarial Cost Function

The cost function used to train a generative model should fit the purpose of the model. If the model is intended for tasks such as generating perceptually correct samples, it is beneficial to maximise the likelihood of a sample drawn from the model, Q, coming from the same distribution as the training data, P. This is equivalent to minimising the Kullback-Leibler (KL) distance, KL[Q||P]. However, if the model is intended for tasks such as retrieval or classification, it is beneficial to maximise the likelihood that a sample drawn from the training data is captured by the model, equivalent to minimising KL[P||Q]. The cost function used in adversarial training optimises the Jensen-Shannon entropy, which can be seen as an even interpolation between KL[Q||P] and KL[P||Q]. Here, we propose an alternative adversarial cost function which allows easy tuning of the model for either task. Our task-specific cost function is evaluated on a dataset of hand-written characters in the following tasks: generation, retrieval and one-shot learning.

Journal article
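The distinction the abstract draws between KL[Q||P] and KL[P||Q] can be checked numerically on small discrete distributions; below, Q covers only one of P's two modes, so the mode-covering direction KL[P||Q] penalises it more heavily (illustrative values only):

```python
import numpy as np

def kl(a, b):
    """Kullback-Leibler divergence KL[a||b] between discrete distributions."""
    return float(np.sum(a * np.log(a / b)))

# P: a bimodal "training" distribution; Q: a model capturing only one mode.
P = np.array([0.49, 0.49, 0.02])
Q = np.array([0.90, 0.05, 0.05])

# KL[Q||P] punishes Q for placing mass where P is small (mode-seeking);
# KL[P||Q] punishes Q for missing mass where P is large (mode-covering).
mode_seeking = kl(Q, P)
mode_covering = kl(P, Q)

# Jensen-Shannon divergence: the even interpolation mentioned in the abstract.
M = 0.5 * (P + Q)
js = 0.5 * kl(P, M) + 0.5 * kl(Q, M)

print(mode_covering > mode_seeking)  # → True: the missed mode dominates
```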

Creswell A, Bharath AA, 2016, Adversarial training for sketch retrieval, Computer Vision – ECCV 2016 Workshops, Publisher: Springer Verlag, ISSN: 0302-9743

Generative Adversarial Networks (GANs) are able to learn excellent representations for unlabelled data which can be applied to image generation and scene classification. Representations learned by GANs have not yet been applied to retrieval. In this paper, we show that the representations learned by GANs can indeed be used for retrieval. We consider heritage documents that contain unlabelled Merchant Marks, sketch-like symbols that are similar to hieroglyphs. We introduce a novel GAN architecture with design features that make it suitable for sketch retrieval. The performance of this sketch-GAN is compared to a modified version of the original GAN architecture with respect to simple invariance properties. Experiments suggest that sketch-GANs learn representations that are suitable for retrieval and which also have increased stability to rotation, scale and translation compared to the standard GAN architecture.

Conference paper

Charalambous C, Bharath AA, 2016, A data augmentation methodology for training machine/deep learning gait recognition algorithms, British Machine Vision Conference, Publisher: BMVA Press, Pages: 110.1-110.12

There are several confounding factors that can reduce the accuracy of gait recognition systems. These factors can reduce the distinctiveness, or alter the features used to characterise gait; they include variations in clothing, lighting, pose and environment, such as the walking surface. Full invariance to all confounding factors is challenging in the absence of high-quality labelled training data. We introduce a simulation-based methodology and a subject-specific dataset which can be used for generating synthetic video frames and sequences for data augmentation. With this methodology, we generated a multi-modal dataset. In addition, we supply simulation files that provide the ability to simultaneously sample from several confounding variables. The basis of the data is real motion capture data of subjects walking and running on a treadmill at different speeds. Results from gait recognition experiments suggest that information about the identity of subjects is retained within synthetically generated examples. The dataset and methodology allow studies into fully-invariant identity recognition spanning a far greater number of observation conditions than would otherwise be possible.

Conference paper

Rivera-Rubio J, Arulkumaran K, Rishi H, Alexiou I, Bharath AA et al., 2016, An assistive haptic interface for appearance-based indoor navigation, Computer Vision and Image Understanding, Vol: 149, Pages: 126-145, ISSN: 1077-3142

Computer vision remains an under-exploited technology for assistive devices. Here, we propose a navigation technique using low-resolution images from wearable or hand-held cameras to identify landmarks that are indicative of a user’s position along crowdsourced paths. We test the components of a system that is able to provide blindfolded users with information about location via tactile feedback. We assess the accuracy of vision-based localisation by making comparisons with estimates of location derived from both a recent SLAM-based algorithm and from indoor surveying equipment. We evaluate the precision and reliability by which location information can be conveyed to human subjects by analysing their ability to infer position from electrostatic feedback in the form of textural (haptic) cues on a tablet device. Finally, we describe a relatively lightweight systems architecture that enables images to be captured and location results to be served back to the haptic device based on journey information from multiple users and devices.

Journal article

Arulkumaran K, Dilokthanakul N, Shanahan M, Bharath AA et al., 2016, Classifying options for deep reinforcement learning, Publisher: IJCAI

Deep reinforcement learning is the learning of multiple levels of hierarchical representations for reinforcement learning. Hierarchical reinforcement learning focuses on temporal abstractions in planning and learning, allowing temporally-extended actions to be transferred between tasks. In this paper we combine one method for hierarchical reinforcement learning - the options framework - with deep Q-networks (DQNs) through the use of different "option heads" on the policy network, and a supervisory network for choosing between the different options. We show that in a domain where we have prior knowledge of the mapping between states and options, our augmented DQN achieves a policy competitive with that of a standard DQN, but with much lower sample complexity. This is achieved through a straightforward architectural adjustment to the DQN, as well as an additional supervised neural network.

Working paper

Othman BA, Greenwood C, Abuelela AF, Bharath AA, Chen S, Theodorou I, Douglas T, Uchida M, Ryan M, Merzaban JS, Porter AE et al., 2016, Targeted Cancer Therapy: Correlative Light-Electron Microscopy Shows RGD-Targeted ZnO Nanoparticles Dissolve in the Intracellular Environment of Triple Negative Breast Cancer Cells and Cause Apoptosis with Intratumor Heterogeneity (Adv. Healthcare Mater. 11/2016)., Advanced Healthcare Materials, Vol: 5, Pages: 1248-1248, ISSN: 2192-2640

On page 1310 J. S. Merzaban, A. E. Porter, and co-workers present fluorescently labeled RGD-targeted ZnO nanoparticles (NPs; green) for the targeted delivery of cytotoxic ZnO to integrin αvβ3 receptors expressed on triple negative breast cancer cells. Correlative light-electron microscopy shows that NPs dissolve into ionic Zn(2+) (blue) upon uptake and cause apoptosis (red) with intra-tumor heterogeneity, thereby providing a possible strategy for targeted breast cancer therapy. Cover design by Ivan Gromicho.

Journal article

Othman BA, Greenwood C, Abuelela AF, Bharath AA, Chen S, Theodorou I, Douglas T, Uchida M, Ryan M, Merzaban JS, Porter AE et al., 2016, Correlative light-electron microscopy shows RGD-targeted ZnO nanoparticles dissolve in the intracellular environment of triple negative breast cancer cells and cause apoptosis with intra-tumor heterogeneity, Advanced Healthcare Materials, Vol: 5, Pages: 1310-1325, ISSN: 2192-2640

ZnO nanoparticles (NPs) are reported to show a high degree of cancer cell selectivity with potential use in cancer imaging and therapy. Questions remain about the mode by which the ZnO NPs cause cell death, whether they exert an intra- or extracellular effect, and the resistance among different cancer cell types to ZnO NP exposure. The present study quantified the variability between the cellular toxicity, dynamics of cellular uptake and dissolution of bare and RGD (Arg-Gly-Asp)-targeted ZnO NPs by MDA-MB-231 cells. Compared to bare ZnO NPs, RGD-targeting of the ZnO NPs to integrin αvβ3 receptors expressed on MDA-MB-231 cells appeared to increase the toxicity of the ZnO NPs to breast cancer cells at lower doses. Confocal microscopy of live MDA-MB-231 cells confirmed uptake of both classes of ZnO NPs with a commensurate rise in intracellular Zn2+ concentration prior to cell death. The response of the cells within the population to intracellular Zn2+ was highly heterogeneous. In addition, the results emphasize the utility of dynamic and quantitative imaging in understanding cell uptake and processing of targeted therapeutic ZnO NPs at the cellular level by heterogeneous cancer cell populations, which could be crucial for the development of optimized treatment strategies.

Journal article

Ma ZB, Yang Y, Liu YX, Bharath AA et al., 2016, Recurrently decomposable 2-D convolvers for FPGA-based digital image processing, IEEE Transactions on Circuits and Systems II: Express Briefs, Vol: 63, Pages: 979-983, ISSN: 1549-7747

Two-dimensional (2-D) convolution is a widely used operation in image processing and computer vision, characterized by intensive computation and frequent memory accesses. Previous efforts to improve the performance of field-programmable gate array (FPGA) convolvers focused on the design of buffering schemes and on minimizing the use of multipliers. A recently proposed recurrently decomposable (RD) filter design method can reduce the computational complexity of 2-D convolutions by splitting the convolution between an image and a large mask into a sequence of convolutions using several smaller masks. This brief explores how to efficiently implement RD based 2-D convolvers using FPGA. Three FPGA architectures are proposed based on RD filters, each with a different buffering scheme. The conclusion is that RD based architectures achieve higher area efficiency than other previously reported state-of-the-art methods, especially for larger convolution masks. An area efficiency metric is also suggested, which allows the most appropriate architecture to be selected.

Journal article
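The decomposition these architectures exploit, where one large mask applied once equals a cascade of smaller masks whose convolution reconstructs the large mask, follows from the associativity of convolution and can be verified directly (a NumPy sketch with made-up kernels, not the FPGA implementation):

```python
import numpy as np

def conv2d_full(a, k):
    """Plain 'full' 2-D convolution, built from shifted, scaled copies of `a`."""
    ah, aw = a.shape
    kh, kw = k.shape
    out = np.zeros((ah + kh - 1, aw + kw - 1))
    for i in range(kh):
        for j in range(kw):
            out[i:i + ah, j:j + aw] += k[i, j] * a
    return out

rng = np.random.default_rng(1)
image = rng.normal(size=(8, 8))
k1 = rng.normal(size=(3, 3))
k2 = rng.normal(size=(3, 3))

# A 5x5 mask built as the convolution of two 3x3 masks...
big_mask = conv2d_full(k1, k2)

# ...gives the same result applied once as the two small masks in cascade,
# which is the recurrent decomposition the RD architectures exploit.
once = conv2d_full(image, big_mask)
cascade = conv2d_full(conv2d_full(image, k1), k2)
print(np.allclose(once, cascade))  # → True
```

The cascade replaces one 25-multiply mask with two 9-multiply masks per output sample, which is the source of the area savings discussed in the brief.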



Rivera-Rubio J, Bharath A, Alexiou I, 2015, Appearance-based indoor localization: A comparison of patch descriptor performance

Appearance-based visual localisation from wearable and hand-held cameras.

Software

Rivera-Rubio J, Bharath A, 2015, Indoor Localisation with Regression Networks and Place Cell Models

First version of the artificial place cell models, with evaluation support and normalization stub.

Software

Rivera-Rubio J, Alexiou I, Bharath AA, 2015, Appearance-based indoor localization: a comparison of patch descriptor performance, Pattern Recognition Letters, Vol: 66, Pages: 109-117, ISSN: 1872-7344

Vision is one of the most important of the senses, and humans use it extensively during navigation. We evaluated different types of image and video frame descriptors that could be used to determine distinctive visual landmarks for localizing a person based on what is seen by a camera that they carry. To do this, we created a database containing over 3 km of video-sequences with ground-truth in the form of distance travelled along different corridors. Using this database, the accuracy of localization—both in terms of knowing which route a user is on—and in terms of position along a certain route, can be evaluated. For each type of descriptor, we also tested different techniques to encode visual structure and to search between journeys to estimate a user’s position. The techniques include single-frame descriptors, those using sequences of frames, and both color and achromatic descriptors. We found that single-frame indexing worked better within this particular dataset. This might be because the motion of the person holding the camera makes the video too dependent on individual steps and motions of one particular journey. Our results suggest that appearance-based information could be an additional source of navigational data indoors, augmenting that provided by, say, radio signal strength indicators (RSSIs). Such visual information could be collected by crowdsourcing low-resolution video feeds, allowing journeys made by different users to be associated with each other, and location to be inferred without requiring explicit mapping. This offers a complementary approach to methods based on simultaneous localization and mapping (SLAM) algorithms.

Journal article

Liu Y, Yang Y, Bharath A, 2015, Recurrently decomposable 2-D filters, Journal of Computational Information Systems, Vol: 11, Pages: 1773-1779, ISSN: 1553-9105

The study of spatial convolution is returning to relevance due to recent rapid developments in deep learning theory and the corresponding growth in the use of convolutional neural networks. However, the finite-impulse response filters that are widely used for spatial convolution are usually both numerous and non-separable, making the associated computational burden high. In this letter, we exploit the property that large 2-D filters can be computed either as combinations of 1-D filters (the conventional Cartesian-separable case) or as combinations of smaller 2-D ones, which we call recurrently-decomposable filters. The proposed new nested structure greatly reduces the computational complexity at no cost in terms of performance. We describe the 2-D filter decomposition in terms of several unconstrained optimization problems and give solutions to these problems. Finally, we conclude the paper with an application to a 2-D fan filter to demonstrate the method's validity.

Journal article

Charalambous CC, Bharath AA, 2015, Viewing angle effect on gait recognition using joint kinematics

Gait offers some advantages as a biometric; it can be applied at a distance, and without a subject's cooperation. There are several factors that influence the accuracy of gait recognition methods. This study investigates only the effect of relative camera-subject viewing angle. We focus on model-based approaches, particularly using joint kinematics. Our results support intuition: capturing the subject from side views provides the best accuracy, and accuracy decays in moving away from fully side-on views, both in the longitudinal and latitudinal directions. Subjects are well-separable when captured from an elevation angle of up to 65°.

Conference paper

Rivera-Rubio J, Alexiou I, Bharath AA, 2015, Indoor Localisation with Regression Networks and Place Cell Models., Publisher: BMVA Press, Pages: 147.1-147.1

Conference paper

Rivera-Rubio J, Alexiou I, Bharath AA, 2015, Associating Locations Between Indoor Journeys from Wearable Cameras, 13th European Conference on Computer Vision (ECCV), Publisher: SPRINGER-VERLAG BERLIN, Pages: 29-44, ISSN: 0302-9743

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
