Imperial College London

Professor Anil Anthony Bharath

Faculty of Engineering
Department of Bioengineering

Academic Director (Singapore)
 
 
 

Contact

 

+44 (0)20 7594 5463
a.bharath
Website

 
 

Location

 

4.12, Royal School of Mines, South Kensington Campus



Publications

Citation

BibTeX format

@article{Creswell:2018:10.1109/TNNLS.2018.2875194,
author = {Creswell, A and Bharath, A},
doi = {10.1109/TNNLS.2018.2875194},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
title = {Inverting the generator of a generative adversarial network},
url = {http://dx.doi.org/10.1109/TNNLS.2018.2875194},
year = {2018}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Generative adversarial networks (GANs) learn a deep generative model that is able to synthesize novel, high-dimensional data samples. New data samples are synthesized by passing latent samples, drawn from a chosen prior distribution, through the generative model. Once trained, the latent space exhibits interesting properties that may be useful for downstream tasks such as classification or retrieval. Unfortunately, GANs do not offer an "inverse model," a mapping from data space back to latent space, making it difficult to infer a latent representation for a given data sample. In this paper, we introduce a technique, inversion, to project data samples, specifically images, to the latent space using a pretrained GAN. Using our proposed inversion technique, we are able to identify which attributes of a data set a trained GAN is able to model and quantify GAN performance, based on a reconstruction loss. We demonstrate how our proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets. We provide code for all of our experiments at https://github.com/ToniCreswell/InvertingGAN.
AU - Creswell,A
AU - Bharath,A
DO - 10.1109/TNNLS.2018.2875194
PY - 2018///
SN - 2162-2388
TI - Inverting the generator of a generative adversarial network
T2 - IEEE Transactions on Neural Networks and Learning Systems
UR - http://dx.doi.org/10.1109/TNNLS.2018.2875194
UR - http://hdl.handle.net/10044/1/65306
ER -
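
The abstract describes inversion as projecting an image into the latent space of a pretrained GAN by minimising a reconstruction loss between the generator's output and the target image. The sketch below illustrates that idea only; it assumes a pretrained PyTorch generator G, and the function name, latent dimension, optimiser settings, and mean-squared reconstruction loss are illustrative choices rather than details taken from the authors' released code (linked above).

# Minimal sketch of GAN inversion via reconstruction-loss minimisation.
# Assumes `G` is a pretrained, frozen PyTorch generator taking latent
# vectors of size `latent_dim`; all hyperparameters are illustrative.
import torch

def invert_generator(G, x, latent_dim=100, steps=1000, lr=0.01):
    """Search for a latent code z whose generation G(z) reconstructs x."""
    G.eval()
    for p in G.parameters():
        p.requires_grad_(False)                       # only z is optimised
    z = torch.randn(x.size(0), latent_dim, requires_grad=True)
    optimiser = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimiser.zero_grad()
        loss = torch.nn.functional.mse_loss(G(z), x)  # reconstruction loss
        loss.backward()                               # gradient flows to z only
        optimiser.step()
    return z.detach(), loss.item()

# The final reconstruction loss can then be used, as the abstract suggests,
# to compare how well different trained GANs model a given set of images.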