Training Generative Adversarial Networks in One Stage

Generative Adversarial Networks (GANs) have demonstrated unprecedented success in various image generation tasks. The encouraging results, however, come at the price of a cumbersome training process, during which the generator and discriminator are alternately updated in two stages. In this paper, we investigate a general training scheme that enables training GANs efficiently in only one stage. Based on the adversarial losses of the generator and discriminator, we categorize GANs into two classes, Symmetric GANs and Asymmetric GANs, and introduce a novel gradient decomposition method to unify the two, allowing us to train both classes in one stage and hence alleviate the training effort. Computational analysis and experimental results on several datasets and various network architectures demonstrate that the proposed one-stage training scheme yields a solid 1.5× acceleration over conventional training schemes, regardless of the network architectures of the generator and discriminator. Furthermore, we show that the proposed method is readily applicable to other adversarial-training scenarios, such as data-free knowledge distillation.

https://arxiv.org/pdf/2103.00430.pdf
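To illustrate the core idea, here is a minimal, self-contained sketch (not the paper's implementation) of one-stage training for a Symmetric GAN, where the generator's loss is the exact negative of the discriminator's fake-sample term. In that case a single shared gradient computation suffices: the discriminator ascends the value function while the generator descends it with the same (sign-flipped) gradient. The 1-D setup, parameter names, and learning rate below are all illustrative assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy 1-D Symmetric GAN (illustrative, not the paper's code):
# real data is a single point r; the generator just outputs theta;
# the discriminator is D(x) = sigmoid(w*x + b).
r = 3.0            # the single "real" data point
w, b = 0.0, 0.0    # discriminator parameters
theta = 0.0        # generator output G(z) = theta (z ignored)
lr = 0.05

def value(w, b, theta):
    # V = log D(real) + log(1 - D(fake)); D ascends V, G descends it.
    return math.log(sigmoid(w * r + b)) + math.log(1.0 - sigmoid(w * theta + b))

for _ in range(4000):
    d_real = sigmoid(w * r + b)
    d_fake = sigmoid(w * theta + b)
    # One shared gradient computation: dV w.r.t. all parameters at once.
    gw = (1.0 - d_real) * r - d_fake * theta
    gb = (1.0 - d_real) - d_fake
    gtheta = -d_fake * w
    # One-stage update: gradient ascent for D, sign-flipped descent for G,
    # instead of two alternating stages with separate forward/backward passes.
    w += lr * gw
    b += lr * gb
    theta -= lr * gtheta

print(theta)  # the generator output drifts toward the real data point r = 3.0
```

In the two-stage scheme this would be two separate gradient computations per iteration (one for the discriminator update, one for the generator update); the sketch shows why, for the symmetric case, a single pass with a sign flip carries the same information, which is the source of the speedup the paper quantifies.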

Generative Adversarial Networks (GANs) have achieved unprecedented success on a variety of image generation tasks. This success, however, comes at the cost of a cumbersome training procedure in which the generator and discriminator are alternately updated in two stages. In this paper, we propose a one-stage training scheme for GANs. Based on the type of adversarial loss, we categorize GANs into two classes, Symmetric GANs and Asymmetric GANs, and introduce a novel gradient decomposition method that unifies the two, allowing both to be trained in a single stage. Computational analysis and experimental results show that one-stage training yields a 1.5× speedup.
