
Determining learning direction via multi-controller model for stably searching generative adversarial networks

Journal Article


Abstract


  • The data generated by a Generative Adversarial Network (GAN) inevitably contains noise, which can be reduced by searching for and optimizing the architecture of the GAN. To search GAN architectures stably, a neural architecture search (NAS) method, StableAutoGAN, is proposed based on the existing algorithm AutoGAN. The stability of conventional reinforcement learning (RL)-based NAS methods for GANs is adversely influenced by the uncertainty of direction, where the controller moves forward even when it receives inaccurate rewards. In StableAutoGAN, a multi-controller model is employed to mitigate this problem by comparing the performance of controllers after receiving rewards. During the search process, each controller independently learns the sampling policy. Meanwhile, the learning effect is measured by a credibility score, which in turn determines the usage of controllers. Our experiments show that the standard deviation of the Fréchet Inception Distance (FID) scores of the GANs discovered by StableAutoGAN is approximately 1/16 and 1/8 of that of AutoGAN on CIFAR-10 and STL-10, respectively, while the quality of the discovered GANs remains similar to AutoGAN's.
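
The abstract's multi-controller idea can be illustrated with a minimal sketch. The class names, the credibility rule (a running mean of rewards), and the selection strategy below are all illustrative assumptions, not the paper's exact method:

```python
import random

class Controller:
    """One of several independently learning controllers (illustrative)."""

    def __init__(self, name):
        self.name = name
        self.credibility = 0.0  # running estimate of learning quality
        self.steps = 0

    def sample_architecture(self):
        # Stand-in for policy-based architecture sampling.
        return {"sampled_by": self.name, "arch_id": random.randrange(100)}

    def receive_reward(self, reward):
        # Update credibility as a running mean of received rewards.
        self.steps += 1
        self.credibility += (reward - self.credibility) / self.steps


def select_controller(controllers):
    # Use the currently most credible controller to drive the search.
    return max(controllers, key=lambda c: c.credibility)


if __name__ == "__main__":
    controllers = [Controller(f"ctrl-{i}") for i in range(3)]
    for _ in range(10):
        # Each controller learns from its own (here, random) reward signal.
        for c in controllers:
            c.receive_reward(random.random())
        best = select_controller(controllers)
        arch = best.sample_architecture()
        print(best.name, arch["arch_id"])
```

The point of the sketch is only the control flow: rewards update per-controller credibility, and credibility, rather than a single controller's possibly misled policy, decides which controller's samples are used next.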

Publication Date


  • 2021

Citation


  • Fan, Y., Zhou, Q., Zhang, W., Bao, S., & Shen, J. (2021). Determining learning direction via multi-controller model for stably searching generative adversarial networks. Neurocomputing, 464, 37-47. doi:10.1016/j.neucom.2021.08.070

Scopus Eid


  • 2-s2.0-85114130523

Start Page


  • 37

End Page


  • 47

Volume


  • 464