
Infer the Input to the Generator of Auxiliary Classifier Generative Adversarial Networks

Conference Paper


Abstract


  • Generative Adversarial Networks (GANs) are deep-learning-based generative models. This paper presents three methods to infer the input to the generator of auxiliary classifier generative adversarial networks (ACGANs), which are a type of conditional GAN. The first two methods, named i-ACGAN-r and i-ACGAN-d, are 'inverting' methods, which obtain an inverse mapping from an image to the class label and the latent sample. By contrast, the third method, referred to as i-ACGAN-e, directly infers both the class label and the latent sample by introducing an encoder into an ACGAN. The three methods were evaluated on two natural scene datasets, using two performance measures: the class recovery accuracy and the image reconstruction error. Experimental results show that i-ACGAN-e outperforms the other two methods in terms of the class recovery accuracy. However, the images generated by the other two methods have smaller image reconstruction errors. The source code is publicly available from https://github.com/XMPeng/Infer-Input-ACGAN.
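The abstract's 'inverting' methods recover the generator input by minimizing an image reconstruction error. The sketch below is not the paper's i-ACGAN implementation (see the linked repository for that); it only illustrates the generic inversion idea on a toy differentiable generator G(z) = tanh(Wz), recovering the latent sample z by gradient descent on ||G(z) - x||². The class-label recovery and the ACGAN discriminator/classifier are omitted for brevity.

```python
import numpy as np

# Toy differentiable "generator": G(z) = tanh(W z), mapping an 8-dim latent
# sample to a 32-dim "image". W is scaled so tanh stays mostly unsaturated.
rng = np.random.default_rng(0)
latent_dim, image_dim = 8, 32
W = rng.normal(size=(image_dim, latent_dim)) * 0.3

def generate(z):
    return np.tanh(W @ z)

# Target image produced by an unknown latent sample z_true.
z_true = rng.normal(size=latent_dim)
x = generate(z_true)

# Invert the generator: gradient descent on the squared reconstruction
# error ||G(z) - x||^2, starting from z = 0.
z = np.zeros(latent_dim)
lr = 0.05
for _ in range(2000):
    pre = W @ z
    residual = np.tanh(pre) - x                        # G(z) - x
    # Chain rule: d/dz ||tanh(Wz) - x||^2
    grad = W.T @ (2.0 * residual * (1.0 - np.tanh(pre) ** 2))
    z -= lr * grad

reconstruction_error = np.linalg.norm(generate(z) - x)
```

With a realizable target and a non-saturating generator, the descent recovers z_true to high accuracy; for a real GAN generator the same loss is nonconvex, which is part of what motivates the paper's encoder-based alternative (i-ACGAN-e).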

Publication Date


  • 2020

Citation


  • Peng, X., Bouzerdoum, A., & Phung, S. L. (2020). Infer the Input to the Generator of Auxiliary Classifier Generative Adversarial Networks. In Proceedings - International Conference on Image Processing, ICIP Vol. 2020-October (pp. 76-80). doi:10.1109/ICIP40778.2020.9190658

Scopus Eid


  • 2-s2.0-85098643986

Web of Science Accession Number


Start Page


  • 76

End Page


  • 80

Volume


  • 2020-October
