
Distributed GAN: Toward a Faster Reinforcement-Learning-Based Architecture Search

Journal Article


Abstract


  • In existing reinforcement learning (RL)-based neural architecture search (NAS) methods for generative adversarial networks (GANs), both the generator and the discriminator architectures are usually treated as search objects. In this article, we take a different perspective and propose an approach that treats the generator as the search objective and the discriminator as a judge that evaluates the performance of the generator architecture. Consequently, we can convert this NAS problem into a GAN-style problem, similar to using a controller to generate sequential data via reinforcement learning in a sequence GAN, except that the controller in our method generates the serialized information of an architecture. Furthermore, we adopt an RL-based distributed search method to update the controller parameters θ. Generally, the reward value is calculated only after the whole architecture has been searched; as another novelty of this article, we employ a reward-shaping method to estimate intermediate rewards and assign them to every cell in the architecture, encouraging the diversity and integrity of all cells. The main contribution of this article is a novel performance estimation mechanism that accelerates the architecture search and improves the search results with specific supplementary strategies. Crucially, this estimation mechanism can be applied to most RL-based NAS methods for GANs. The experiments demonstrate that our method achieves satisfactory results against our design objectives.
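
The following is a minimal illustrative sketch, not the authors' released code. Under simplifying assumptions, it shows the mechanism the abstract describes: an RL controller samples a generator architecture cell by cell, each cell receives a shaped intermediate reward (here shaped_cell_reward is a hypothetical placeholder standing in for the discriminator "judge"), and the controller parameters θ are updated with a REINFORCE-style policy gradient. The names NUM_CELLS, NUM_OPS, and the reward function are assumptions for illustration only, and the distributed-worker aspect of the search is omitted.

    # Hedged sketch of the cell-wise reward-shaping idea (PyTorch, illustrative only).
    import torch
    import torch.nn as nn
    from torch.distributions import Categorical

    NUM_CELLS = 4   # number of cells in the searched generator (assumed)
    NUM_OPS = 6     # candidate operations per cell (assumed)
    HIDDEN = 64
    GAMMA = 0.9     # discount applied to shaped per-cell rewards

    class Controller(nn.Module):
        """Recurrent controller that emits one operation choice per cell."""
        def __init__(self):
            super().__init__()
            self.rnn = nn.GRUCell(NUM_OPS, HIDDEN)
            self.head = nn.Linear(HIDDEN, NUM_OPS)

        def sample(self):
            h = torch.zeros(1, HIDDEN)
            x = torch.zeros(1, NUM_OPS)
            log_probs, actions = [], []
            for _ in range(NUM_CELLS):
                h = self.rnn(x, h)
                dist = Categorical(logits=self.head(h))
                a = dist.sample()
                log_probs.append(dist.log_prob(a))
                actions.append(a.item())
                x = torch.nn.functional.one_hot(a, NUM_OPS).float()
            return actions, torch.cat(log_probs)

    def shaped_cell_reward(actions, cell_idx):
        """Placeholder for the intermediate reward the discriminator 'judge'
        would assign to the partial architecture ending at cell_idx (assumption)."""
        return torch.rand(()).item()

    controller = Controller()
    opt = torch.optim.Adam(controller.parameters(), lr=3e-4)

    for step in range(100):
        actions, log_probs = controller.sample()
        rewards = [shaped_cell_reward(actions, i) for i in range(NUM_CELLS)]
        # Discounted return per cell, so every cell gets credit rather than only
        # the final, fully searched architecture.
        returns, g = [], 0.0
        for r in reversed(rewards):
            g = r + GAMMA * g
            returns.insert(0, g)
        returns = torch.tensor(returns)
        returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # simple baseline
        loss = -(log_probs * returns).sum()  # REINFORCE policy-gradient objective
        opt.zero_grad()
        loss.backward()
        opt.step()

In the paper's setting, the placeholder reward would instead come from the discriminator judging the partial generator, and the update of θ would be carried out by distributed workers; this sketch only conveys how per-cell credit assignment via reward shaping plugs into a standard policy-gradient controller update.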

Publication Date


  • 2022

Citation


  • Shi, J., Fan, Y., Zhou, G., & Shen, J. (2022). Distributed GAN: Toward a Faster Reinforcement-Learning-Based Architecture Search. IEEE Transactions on Artificial Intelligence, 3(3), 391-401. doi:10.1109/TAI.2021.3133509

Scopus Eid


  • 2-s2.0-85132960656

Web Of Science Accession Number


Start Page


  • 391

End Page


  • 401

Volume


  • 3

Issue


  • 3
