Generative adversarial networks (GANs) are deep-learning-based generative models. This paper presents three methods for inferring the input to the generator of an auxiliary classifier generative adversarial network (ACGAN), a type of conditional GAN. The first two methods, named i-ACGAN-r and i-ACGAN-d, are 'inverting' methods that obtain an inverse mapping from an image to the class label and the latent sample. In contrast, the third method, referred to as i-ACGAN-e, directly infers both the class label and the latent sample by introducing an encoder into the ACGAN. The three methods were evaluated on two natural scene datasets using two performance measures: class recovery accuracy and image reconstruction error. Experimental results show that i-ACGAN-e outperforms the other two methods in class recovery accuracy, whereas the other two methods achieve lower image reconstruction errors. The source code is publicly available at https://github.com/XMPeng/Infer-Input-ACGAN.
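The core of the 'inverting' approach can be illustrated with a minimal toy sketch: given an output x produced by a fixed generator G, recover the latent sample z by gradient descent on the reconstruction error ||G(z) - x||². The linear generator, dimensions, and learning rate below are illustrative assumptions, not the actual i-ACGAN-r/-d procedure from the paper.

```python
import numpy as np

# Toy sketch of generator inversion: recover the latent sample z that
# produced an observed output x, by minimizing ||G(z) - x||^2 over z.
# G here is a fixed random linear map standing in for a trained generator;
# a real ACGAN generator is a deep network conditioned on a class label.

rng = np.random.default_rng(0)
latent_dim, image_dim = 8, 32
W = rng.standard_normal((image_dim, latent_dim))  # stand-in generator weights

def G(z):
    # Linear "generator": maps a latent vector to an "image" vector.
    return W @ z

z_true = rng.standard_normal(latent_dim)
x = G(z_true)  # observed output whose latent input we want to infer

z = np.zeros(latent_dim)  # initial latent estimate
lr = 0.005
for _ in range(3000):
    grad = 2.0 * W.T @ (G(z) - x)  # gradient of ||G(z) - x||^2 w.r.t. z
    z -= lr * grad

recon_error = np.linalg.norm(G(z) - x)
print(recon_error)  # should be close to zero after convergence
```

For a deep generator the same objective is minimized with automatic differentiation, and the class label can either be optimized jointly or, as in the encoder-based i-ACGAN-e variant, predicted directly by a learned inverse network.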