Publications from our Researchers

Several of our current PhD candidates and fellow researchers at the Data Science Institute have published, or are in the process of publishing, papers presenting their research.


  • JOURNAL ARTICLE
    Charalambous CC, Bharath AA,

    A data augmentation methodology for training machine/deep learning gait recognition algorithms

    There are several confounding factors that can reduce the accuracy of gait recognition systems. These factors can reduce the distinctiveness of, or alter, the features used to characterise gait; they include variations in clothing, lighting, pose and environment, such as the walking surface. Full invariance to all confounding factors is challenging in the absence of high-quality labelled training data. We introduce a simulation-based methodology and a subject-specific dataset which can be used for generating synthetic video frames and sequences for data augmentation. With this methodology, we generated a multi-modal dataset. In addition, we supply simulation files that provide the ability to simultaneously sample from several confounding variables. The basis of the data is real motion capture data of subjects walking and running on a treadmill at different speeds. Results from gait recognition experiments suggest that information about the identity of subjects is retained within synthetically generated examples. The dataset and methodology allow studies into fully-invariant identity recognition spanning a far greater number of observation conditions than would otherwise be possible.
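
    As a toy illustration of jointly sampling several confounding variables to parameterise synthetic sequence generation, the sketch below draws one joint setting per rendered clip. The variable names, value ranges and the render stub are hypothetical placeholders, not the simulation files or parameters released with the paper.

```python
# Toy sketch: jointly sample confounding variables to parameterise a synthetic
# rendering pass. All names, ranges and the render_sequence() stub are
# hypothetical, for illustration only.
import random

CONFOUNDERS = {
    "clothing":       ["tight", "loose", "coat"],   # categorical choices
    "lighting_lux":   (100, 1000),                  # uniform range
    "camera_azimuth": (0, 360),                     # degrees around the subject
    "surface":        ["treadmill", "flat", "incline"],
    "speed_kmh":      (3.0, 12.0),                  # walking to running
}

def sample_confounders(rng=random):
    """Draw one joint setting of all confounding variables."""
    setting = {}
    for name, spec in CONFOUNDERS.items():
        setting[name] = rng.uniform(*spec) if isinstance(spec, tuple) else rng.choice(spec)
    return setting

def render_sequence(mocap_clip, setting):
    """Placeholder for the rendering step that would turn motion-capture data
    plus one confounder setting into synthetic video frames."""
    return {"clip": mocap_clip, "confounders": setting}

augmented = [render_sequence("subject01_run", sample_confounders()) for _ in range(4)]
```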

  • JOURNAL ARTICLE
    Creswell A, Bharath AA,

    Task Specific Adversarial Cost Function

    The cost function used to train a generative model should fit the purpose of the model. If the model is intended for tasks such as generating perceptually correct samples, it is beneficial to maximise the likelihood of a sample drawn from the model, Q, coming from the same distribution as the training data, P. This is equivalent to minimising the Kullback-Leibler (KL) distance, KL[Q||P]. However, if the model is intended for tasks such as retrieval or classification it is beneficial to maximise the likelihood that a sample drawn from the training data is captured by the model, equivalent to minimising KL[P||Q]. The cost function used in adversarial training optimises the Jensen-Shannon entropy, which can be seen as an even interpolation between KL[Q||P] and KL[P||Q]. Here, we propose an alternative adversarial cost function which allows easy tuning of the model for either task. Our task-specific cost function is evaluated on a dataset of hand-written characters in the following tasks: generation, retrieval and one-shot learning.
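
    For reference, the Jensen-Shannon divergence that the abstract describes as an even interpolation between the two KL terms is usually written via the mixture distribution M (standard definition, not reproduced from the paper):

```latex
M = \tfrac{1}{2}\,(P + Q), \qquad
\mathrm{JS}[P \,\|\, Q] \;=\; \tfrac{1}{2}\,\mathrm{KL}[P \,\|\, M] \;+\; \tfrac{1}{2}\,\mathrm{KL}[Q \,\|\, M]
```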

  • JOURNAL ARTICLE
    Creswell A, Bharath AA,

    Denoising Adversarial Autoencoders

    Unsupervised learning is of growing interest because it unlocks the potential held in vast amounts of unlabelled data to learn useful representations for inference. Autoencoders, a form of generative model, may be trained by learning to reconstruct unlabelled input data from a latent representation space. More robust representations may be produced by an autoencoder if it learns to recover clean input samples from corrupted ones. Representations may be further improved by introducing regularisation during training to shape the distribution of the encoded data in latent space. We suggest denoising adversarial autoencoders, which combine denoising and regularisation, shaping the distribution of latent space using adversarial training. We introduce a novel analysis that shows how denoising may be incorporated into the training and sampling of adversarial autoencoders. Experiments are performed to assess the contributions that denoising makes to the learning of representations for classification and sample synthesis. Our results suggest that autoencoders trained using a denoising criterion achieve higher classification performance, and can synthesise samples that are more consistent with the input data than those trained without a corruption process.
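
    As a rough illustration of the scheme described above (corrupt the input, train the autoencoder to reconstruct the clean sample, and adversarially push the encoded codes towards a chosen prior), here is a minimal PyTorch-style sketch. The layer sizes, Gaussian corruption and Gaussian prior are illustrative assumptions, not the architecture or training details of the paper.

```python
# Minimal denoising adversarial autoencoder sketch (illustrative assumptions only).
import torch
import torch.nn as nn

latent_dim = 8
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784), nn.Sigmoid())
# Discriminator judges whether a latent code comes from the prior or from the encoder.
discriminator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def training_step(x_clean):
    # 1) Denoising reconstruction: encode a corrupted input, reconstruct the clean one.
    x_noisy = (x_clean + 0.3 * torch.randn_like(x_clean)).clamp(0, 1)
    z = encoder(x_noisy)
    recon_loss = bce(decoder(z), x_clean)

    # 2) Adversarial regularisation: train the discriminator to tell prior samples from codes.
    z_prior = torch.randn(x_clean.size(0), latent_dim)
    d_loss = bce(discriminator(z_prior), torch.ones(x_clean.size(0), 1)) + \
             bce(discriminator(z.detach()), torch.zeros(x_clean.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 3) Autoencoder update: reconstruct well while fooling the discriminator.
    g_loss = recon_loss + bce(discriminator(z), torch.ones(x_clean.size(0), 1))
    opt_ae.zero_grad(); g_loss.backward(); opt_ae.step()
    return recon_loss.item(), d_loss.item()
```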

  • JOURNAL ARTICLE
    Curcin V, Guo Y, Gilardoni F,

    Scientific Workflow Applied to Nano- and Material Sciences

  • CONFERENCE PAPER
    Arulkumaran K, Dilokthanakul N, Shanahan M, Bharath AA, 2016,

    Classifying Options for Deep Reinforcement Learning.

    , Deep Reinforcement Learning: Frontiers and Challenges, IJCAI 2016, Publisher: IJCAI

    In this paper we combine one method for hierarchical reinforcement learning - the options framework - with deep Q-networks (DQNs) through the use of different "option heads" on the policy network, and a supervisory network for choosing between the different options. We utilise our setup to investigate the effects of architectural constraints in subtasks with positive and negative transfer, across a range of network capacities. We empirically show that our augmented DQN has lower sample complexity when simultaneously learning subtasks with negative transfer, without degrading performance when learning subtasks with positive transfer.
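
    A minimal sketch of the kind of architecture described above: a shared torso with one Q-value head per option, plus a component that scores which option to follow. For brevity the supervisory part is shown as an extra head on the same network, whereas the paper describes a separate supervisory network; all sizes are illustrative assumptions.

```python
# Illustrative Q-network with per-option heads and an option-selection head.
import torch
import torch.nn as nn

class OptionHeadDQN(nn.Module):
    def __init__(self, obs_dim, n_actions, n_options, hidden=128):
        super().__init__()
        # Shared torso producing features used by every head.
        self.torso = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # One Q-value head per option (sub-policy).
        self.option_heads = nn.ModuleList(
            [nn.Linear(hidden, n_actions) for _ in range(n_options)]
        )
        # Supervisory head scoring which option to follow in the current state.
        self.supervisor = nn.Linear(hidden, n_options)

    def forward(self, obs):
        h = self.torso(obs)
        q_per_option = torch.stack([head(h) for head in self.option_heads], dim=1)
        option_scores = self.supervisor(h)
        return q_per_option, option_scores

net = OptionHeadDQN(obs_dim=4, n_actions=6, n_options=2)
q, scores = net(torch.randn(1, 4))      # q: (1, 2, 6), scores: (1, 2)
chosen = scores.argmax(dim=1)           # pick an option greedily
action = q[0, chosen].argmax(dim=1)     # act greedily within that option
```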

  • JOURNAL ARTICLE
    Bertone G, Calore F, Caron S, Ruiz de Austri R, Kim JS, Trotta R, Weniger C, 2016,

    Global analysis of the pMSSM in light of the Fermi GeV excess: prospects for the LHC Run-II and astroparticle experiments

    , JOURNAL OF COSMOLOGY AND ASTROPARTICLE PHYSICS, Vol: 2016, Pages: 037-037, ISSN: 1475-7516

    We present a new global fit of the 19-dimensional phenomenological Minimal Supersymmetric Standard Model (pMSSM-19) that complies with all the latest experimental results from indirect, direct and accelerator dark matter searches. We show that the model provides a satisfactory explanation of the excess of gamma rays from the Galactic centre observed by the Fermi Large Area Telescope, assuming that it is produced by the annihilation of neutralinos in the Milky Way halo. We identify two regions that pass all the constraints: the first corresponds to neutralinos with a mass of ∼80-100 GeV annihilating into WW with a branching ratio of 95%; the second to heavier neutralinos, with a mass of ∼180-200 GeV annihilating into t t̄ with a branching ratio of 87%. We show that neutralinos compatible with the Galactic centre GeV excess will soon be within the reach of LHC Run-II - notably through searches for charginos and neutralinos, squarks and light smuons - and of Xenon1T, thanks to its unprecedented sensitivity to the spin-dependent cross-section off neutrons.

  • JOURNAL ARTICLE
    Ma Z-B, Yang Y, Liu Y-X, Bharath AA, 2016,

    Recurrently Decomposable 2-D Convolvers for FPGA-Based Digital Image Processing

    , IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, Vol: 63, Pages: 979-983, ISSN: 1549-7747

    Two-dimensional (2-D) convolution is a widely used operation in image processing and computer vision, characterized by intensive computation and frequent memory accesses. Previous efforts to improve the performance of field-programmable gate array (FPGA) convolvers focused on the design of buffering schemes and on minimizing the use of multipliers. A recently proposed recurrently decomposable (RD) filter design method can reduce the computational complexity of 2-D convolutions by splitting the convolution between an image and a large mask into a sequence of convolutions using several smaller masks. This brief explores how to efficiently implement RD based 2-D convolvers using FPGA. Three FPGA architectures are proposed based on RD filters, each with a different buffering scheme. The conclusion is that RD based architectures achieve higher area efficiency than other previously reported state-of-the-art methods, especially for larger convolution masks. An area efficiency metric is also suggested, which allows the most appropriate architecture to be selected.
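
    The general idea of replacing one convolution with a large mask by a sequence of convolutions with smaller masks rests on the associativity of convolution, which the toy NumPy/SciPy check below illustrates. This is only the underlying identity; the paper's recurrently decomposable filters and the FPGA buffering schemes are considerably more involved.

```python
# Toy check: convolving with a large mask built from two small masks gives the
# same result as applying the two small masks in sequence.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
image = rng.random((64, 64))
k1 = rng.random((3, 3))
k2 = rng.random((3, 3))

big_mask = convolve2d(k1, k2, mode="full")           # 5x5 mask from two 3x3 masks

direct = convolve2d(image, big_mask, mode="valid")   # one pass with the large mask
sequential = convolve2d(convolve2d(image, k1, mode="valid"), k2, mode="valid")

print(np.allclose(direct, sequential))               # True (up to floating-point error)
```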

  • CONFERENCE PAPER
    Heinis T, Ailamaki A, 2015,

    Reconsolidating Data Structures.

    , EDBT/ICDT 2015 Joint Conference, Publisher: OpenProceedings.org, Pages: 665-670
  • JOURNAL ARTICLE
    Heinis T, Ham DA, 2015,

    On-the-Fly Data Synopses: Efficient Data Exploration in the Simulation Sciences

    , SIGMOD RECORD, Vol: 44, Pages: 23-28, ISSN: 0163-5808

    As a consequence of ever more powerful computing hardware and increasingly precise instruments, our capacity to produce scientific data by far outpaces our ability to efficiently store and analyse it. Few of today's tools to analyse scientific data are able to handle the deluge captured by instruments or generated by supercomputers. In many scenarios, however, it suffices to analyse a small subset of the data in detail. What scientists analysing the data consequently need are efficient means to explore the full dataset using approximate query results and to identify the subsets of interest. Once found, interesting areas can still be scrutinised using a precise, but also more time-consuming analysis. Data synopses fit the bill as they provide fast (but approximate) query execution on massive amounts of data. Generating data synopses after the data is stored, however, requires us to analyse all the data again, and is thus inefficient. What we propose is to generate the synopsis for simulation applications on-the-fly when the data is captured. Doing so typically means changing the simulation or data capturing code and is tedious and typically just a one-off solution that is not generally applicable. In contrast, our vision gives scientists a high-level language and the infrastructure needed to generate code that creates data synopses on-the-fly, as the simulation runs. In this paper we discuss the data management challenges associated with our approach.
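
    As a generic illustration of building a synopsis while data is produced rather than after it is stored, the sketch below maintains a fixed-bin histogram as a stand-in "simulation" emits values, and can answer approximate range-count queries at any point. It is not the high-level language or infrastructure proposed in the paper.

```python
# Generic streaming synopsis: a fixed-bin histogram updated as values arrive,
# supporting approximate range-count queries at any time.
import random

class StreamingHistogram:
    def __init__(self, lo, hi, n_bins):
        self.lo, self.hi, self.n_bins = lo, hi, n_bins
        self.counts = [0] * n_bins

    def add(self, x):
        # Clamp into [lo, hi] and increment the matching bin.
        i = int((min(max(x, self.lo), self.hi) - self.lo) / (self.hi - self.lo) * self.n_bins)
        self.counts[min(i, self.n_bins - 1)] += 1

    def approx_count(self, a, b):
        # Approximate number of seen values in [a, b], using bin centres only.
        width = (self.hi - self.lo) / self.n_bins
        return sum(c for i, c in enumerate(self.counts)
                   if a <= self.lo + (i + 0.5) * width <= b)

synopsis = StreamingHistogram(0.0, 1.0, 50)
for _ in range(100_000):                   # stand-in for values emitted by a simulation
    synopsis.add(random.random())
print(synopsis.approx_count(0.25, 0.5))    # roughly a quarter of the 100,000 values
```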

  • CONFERENCE PAPER
    Karpathiotakis M, Alagiannis I, Heinis T, Branco M, Ailamaki A, 2015,

    Just-In-Time Data Virtualization: Lightweight Data Management with ViDa.

    , CIDR 2015, Seventh Biennial Conference on Innovative Data Systems Research, Publisher: www.cidrdb.org
  • CONFERENCE PAPER
    Rivera-Rubio J, Alexiou I, Bharath AA, 2015,

    Indoor Localisation with Regression Networks and Place Cell Models.

    , Publisher: BMVA Press, Pages: 147.1-147.1
  • CONFERENCE PAPER
    Rivera-Rubio J, Alexiou I, Bharath AA, 2015,

    Associating Locations Between Indoor Journeys from Wearable Cameras

    , 13th European Conference on Computer Vision (ECCV), Publisher: SPRINGER-VERLAG BERLIN, Pages: 29-44, ISSN: 0302-9743
  • JOURNAL ARTICLE
    Rivera-Rubio J, Alexiou I, Bharath AA, 2015,

    Appearance-based indoor localization: A comparison of patch descriptor performance

    , PATTERN RECOGNITION LETTERS, Vol: 66, Pages: 109-117, ISSN: 0167-8655

    Vision is one of the most important of the senses, and humans use it extensively during navigation. We evaluated different types of image and video frame descriptors that could be used to determine distinctive visual landmarks for localizing a person based on what is seen by a camera that they carry. To do this, we created a database containing over 3 km of video-sequences with ground-truth in the form of distance travelled along different corridors. Using this database, the accuracy of localization - both in terms of knowing which route a user is on - and in terms of position along a certain route, can be evaluated. For each type of descriptor, we also tested different techniques to encode visual structure and to search between journeys to estimate a user's position. The techniques include single-frame descriptors, those using sequences of frames, and both colour and achromatic descriptors. We found that single-frame indexing worked better within this particular dataset. This might be because the motion of the person holding the camera makes the video too dependent on individual steps and motions of one particular journey. Our results suggest that appearance-based information could be an additional source of navigational data indoors, augmenting that provided by, say, radio signal strength indicators (RSSIs). Such visual information could be collected by crowdsourcing low-resolution video feeds, allowing journeys made by different users to be associated with each other, and location to be inferred without requiring explicit mapping. This offers a complementary approach to methods based on simultaneous localization and mapping (SLAM) algorithms.
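
    A highly simplified sketch of appearance-based localisation: compute a descriptor per frame, then estimate position by nearest-neighbour matching against descriptors from a previously recorded journey. The thumbnail descriptor and the toy data below are placeholders, not the descriptors or dataset evaluated in the paper.

```python
# Toy appearance-based localisation by nearest-neighbour descriptor matching.
import numpy as np

def frame_descriptor(frame, size=8):
    # frame: 2-D grayscale array; descriptor: flattened, L2-normalised block-average thumbnail.
    h, w = frame.shape
    thumb = frame[:h - h % size, :w - w % size] \
        .reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    v = thumb.flatten()
    return v / (np.linalg.norm(v) + 1e-8)

def localise(query_frame, db_descriptors, db_positions):
    # Return the along-route position of the most similar stored frame.
    q = frame_descriptor(query_frame)
    similarities = db_descriptors @ q          # cosine similarity (descriptors are normalised)
    return db_positions[int(np.argmax(similarities))]

# Build a toy "journey" database: one descriptor per metre of travel.
rng = np.random.default_rng(1)
frames = [rng.random((120, 160)) for _ in range(50)]
db = np.stack([frame_descriptor(f) for f in frames])
positions = np.arange(50, dtype=float)         # metres along the corridor

# A slightly perturbed copy of frame 17 should map back to position 17.0.
print(localise(frames[17] + 0.01 * rng.random((120, 160)), db, positions))
```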

  • CONFERENCE PAPER
    Tauheed F, Heinis T, Ailamaki A, 2015,

    THERMAL-JOIN: A Scalable Spatial Join for Dynamic Workloads.

    , ACM SIGMOD International Conference on Management of Data (SIGMOD ’15), Publisher: ACM, Pages: 939-950

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
