Training Generative Adversarial Networks with Limited Data

Training generative adversarial networks (GAN) using too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes. The approach does not require changes to loss functions or network architectures, and is applicable both when training from scratch and when fine-tuning an existing GAN on another dataset. We demonstrate, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images. We expect this to open up new application domains for GANs. We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42.

https://arxiv.org/abs/2006.06676

Training a GAN with very little data causes the discriminator to overfit, and training diverges. We propose an adaptive discriminator augmentation mechanism that effectively stabilizes training on small datasets. Our method requires no changes to the loss function or model architecture, and can be applied both when training from scratch and when fine-tuning. Experiments on several datasets show that our method matches the performance of the original StyleGAN2 with only a few thousand training images. We hope this work opens up new application domains for GANs; we also show that on CIFAR-10, which is effectively a limited-data benchmark, our method reduces the FID from 5.59 to 2.42.
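The core mechanism is simple: every image the discriminator sees is augmented with probability p, and p itself is adjusted on the fly based on how much the discriminator is overfitting. Below is a minimal sketch of this adaptive p-update, assuming a PyTorch training loop. The heuristic r_t = E[sign(D(x_real))] and its target value 0.6 follow the paper; the class name `AdaptiveAugment`, the helper `maybe_augment`, the step size, and the update interval are illustrative assumptions, not the official implementation (which uses a much richer pipeline of differentiable augmentations, linked from the arXiv page).

```python
import torch

class AdaptiveAugment:
    """Tracks the overfitting heuristic r_t = E[sign(D(x_real))] and
    nudges the augmentation probability p toward a fixed target."""

    def __init__(self, target=0.6, step=0.01, update_every=4):
        self.target = target          # r_t value the controller tries to hold
        self.step = step              # illustrative: how much p moves per update
        self.update_every = update_every  # illustrative update interval
        self.p = 0.0                  # augmentation probability, starts at 0
        self._sign_sum = 0.0
        self._count = 0
        self._iters = 0

    def update(self, d_real_logits: torch.Tensor) -> float:
        """Call once per discriminator step with D's logits on real images."""
        self._sign_sum += torch.sign(d_real_logits).sum().item()
        self._count += d_real_logits.numel()
        self._iters += 1
        if self._iters % self.update_every == 0:
            r_t = self._sign_sum / max(self._count, 1)
            # If D is too confident on reals (overfitting), augment more;
            # otherwise back off. Clamp p to a valid probability.
            self.p += self.step if r_t > self.target else -self.step
            self.p = min(max(self.p, 0.0), 1.0)
            self._sign_sum, self._count = 0.0, 0
        return self.p

def maybe_augment(images: torch.Tensor, p: float) -> torch.Tensor:
    """Toy stand-in for the paper's augmentation pipeline: flip each
    image horizontally with probability p. The real pipeline applies
    many transforms, all kept differentiable so gradients reach G."""
    mask = torch.rand(images.size(0), 1, 1, 1, device=images.device) < p
    return torch.where(mask, images.flip(-1), images)
```

In use, `maybe_augment` would be applied to both the real and the generated images before they reach the discriminator, so D never learns to distinguish augmented from clean data; the generator still only ever produces clean images.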
