Differentiable Augmentation for Data-Efficient GAN Training


The performance of generative adversarial networks (GANs) heavily deteriorates given a limited amount of training data. This is mainly because the discriminator is memorizing the exact training set. To combat it, we propose Differentiable Augmentation (DiffAugment), a simple method that improves the data efficiency of GANs by imposing various types of differentiable augmentations on both real and fake samples. Previous attempts to directly augment the training data manipulate the distribution of real images, yielding little benefit; DiffAugment enables us to adopt the differentiable augmentation for the generated samples, effectively stabilizes training, and leads to better convergence. Experiments demonstrate consistent gains of our method over a variety of GAN architectures and loss functions for both unconditional and class-conditional generation. With DiffAugment, we achieve a state-of-the-art FID of 6.80 with an IS of 100.8 on ImageNet 128×128. Furthermore, with only 20% training data, we can match the top performance on CIFAR-10 and CIFAR-100. Finally, our method can generate high-fidelity images using only 100 images without pre-training, while being on par with existing transfer learning algorithms.
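The core mechanism is easiest to see in code. Below is a minimal sketch (not the authors' implementation) of where DiffAugment sits in the two GAN objectives, assuming a non-saturating (softplus) loss and placeholder `D`, `G`, and a differentiable transform `T`: the same `T` is applied to real and fake images, and because `T` is differentiable, gradients still flow through it to the generator.

```python
# Minimal sketch (not the official code) of DiffAugment's placement in the
# GAN losses. D, G, and the differentiable transform T are placeholders;
# the loss shown is the non-saturating (softplus) formulation.
import torch
import torch.nn.functional as F

def d_loss(D, G, x_real, z, T):
    # The discriminator only ever sees augmented samples, real and fake
    # alike, so it cannot simply memorize the un-augmented training set.
    logits_real = D(T(x_real))
    logits_fake = D(T(G(z).detach()))  # detach: no generator update here
    return F.softplus(-logits_real).mean() + F.softplus(logits_fake).mean()

def g_loss(D, G, z, T):
    # T is differentiable, so the generator receives gradients
    # through the augmentation as well.
    return F.softplus(-D(T(G(z)))).mean()
```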

http://arxiv.org/abs/2006.10738

This paper proposes a differentiable data augmentation method for GAN training. It is generally understood that the GAN discriminator memorizes the exact training set, which hurts GAN performance. Conventional augmentation only manipulates the distribution of the real training images, yielding little benefit. By augmenting both real and fake samples before they are fed to the discriminator, the proposed differentiable augmentation effectively stabilizes training and leads to better convergence. The simple augmentations used in the paper are: translation (shifting the image within [-1/8, 1/8] of the image size, padding the shifted image with zeros), cutout (masking the image with a random square of half the image size), and color (randomly adjusting brightness within [-0.5, 0.5], contrast within [0.5, 1.5], and saturation within [0, 2]).
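For concreteness, here is a rough PyTorch sketch of those three policies, assuming NCHW float tensors; the official DiffAugment release (linked from the arXiv page) is more general, this only illustrates that each operation stays differentiable with respect to the input `x`.

```python
# Rough sketch of the Color + Translation + Cutout policies; every op is a
# differentiable function of x (shape: N x C x H x W).
import torch
import torch.nn.functional as F

def rand_brightness(x):
    # Shift brightness by a per-image value drawn uniformly from [-0.5, 0.5).
    return x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)

def rand_saturation(x):
    # Scale around the per-pixel channel mean by a factor in [0, 2).
    mean = x.mean(dim=1, keepdim=True)
    factor = torch.rand(x.size(0), 1, 1, 1, device=x.device) * 2
    return (x - mean) * factor + mean

def rand_contrast(x):
    # Scale around the per-image mean by a factor in [0.5, 1.5).
    mean = x.mean(dim=[1, 2, 3], keepdim=True)
    factor = torch.rand(x.size(0), 1, 1, 1, device=x.device) + 0.5
    return (x - mean) * factor + mean

def rand_translation(x, ratio=0.125):
    # Shift the batch by up to 1/8 of the image size, padding with zeros.
    # (The official code draws a shift per image; one shift per batch
    # keeps this sketch short.)
    h, w = x.size(2), x.size(3)
    s = int(h * ratio)
    ty = int(torch.randint(-s, s + 1, (1,)))
    tx = int(torch.randint(-s, s + 1, (1,)))
    x = F.pad(x, [s, s, s, s])
    return x[:, :, s + ty:s + ty + h, s + tx:s + tx + w]

def rand_cutout(x, ratio=0.5):
    # Zero out a random square of half the image size. The mask does not
    # depend on x, so gradients still flow through the kept pixels.
    h, w = x.size(2), x.size(3)
    ch, cw = int(h * ratio), int(w * ratio)
    cy = int(torch.randint(0, h - ch + 1, (1,)))
    cx = int(torch.randint(0, w - cw + 1, (1,)))
    mask = torch.ones_like(x)
    mask[:, :, cy:cy + ch, cx:cx + cw] = 0
    return x * mask

def diff_augment(x):
    # Apply all three policies in sequence (Color + Translation + Cutout),
    # the strongest combination reported in the paper.
    for fn in (rand_brightness, rand_saturation, rand_contrast,
               rand_translation, rand_cutout):
        x = fn(x)
    return x
```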
