Variational Adversarial Active Learning

Our model learns the distribution of labeled data in a latent space using a VAE optimized with both reconstruction and adversarial losses. A binary discriminator then predicts which examples are unlabeled, and the points it judges most likely to be unlabeled are sent to an oracle for annotation. The VAE is trained to fool the adversarial network into believing that all examples come from the labeled pool, while the adversarial classifier is trained to differentiate labeled from unlabeled samples.
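
The interplay described above can be made concrete with a short PyTorch sketch of one training step. The module names, architectures, and loss weights below are illustrative assumptions for a minimal example, not the implementation in the linked repository: the VAE is optimized with reconstruction, KL, and an adversarial term that pushes the discriminator to label every latent as "labeled", while the discriminator is trained to separate the two pools.

```python
# Minimal sketch of one VAAL-style training step (assumed simplified modules).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleVAE(nn.Module):
    def __init__(self, in_dim=784, z_dim=32):
        super().__init__()
        self.enc = nn.Linear(in_dim, 2 * z_dim)   # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, in_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar, z

class Discriminator(nn.Module):
    def __init__(self, z_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, z):
        return self.net(z)  # logit for "this latent comes from the labeled pool"

def vaal_step(vae, disc, opt_vae, opt_disc, x_lab, x_unlab, adv_weight=1.0):
    bce = F.binary_cross_entropy_with_logits
    ones = lambda n: torch.ones(n, 1)
    zeros = lambda n: torch.zeros(n, 1)

    # VAE update: reconstruction + KL on both pools, plus an adversarial term
    # that tries to make the discriminator call *every* latent "labeled".
    recon_l, mu_l, logvar_l, z_l = vae(x_lab)
    recon_u, mu_u, logvar_u, z_u = vae(x_unlab)
    rec = F.mse_loss(recon_l, x_lab) + F.mse_loss(recon_u, x_unlab)
    kl = -0.5 * torch.mean(1 + logvar_l - mu_l.pow(2) - logvar_l.exp()) \
         -0.5 * torch.mean(1 + logvar_u - mu_u.pow(2) - logvar_u.exp())
    adv = bce(disc(z_l), ones(len(x_lab))) + bce(disc(z_u), ones(len(x_unlab)))
    opt_vae.zero_grad()
    (rec + kl + adv_weight * adv).backward()
    opt_vae.step()

    # Discriminator update: labeled latents -> 1, unlabeled latents -> 0.
    with torch.no_grad():
        _, _, _, z_l = vae(x_lab)
        _, _, _, z_u = vae(x_unlab)
    d_loss = bce(disc(z_l), ones(len(x_lab))) + bce(disc(z_u), zeros(len(x_unlab)))
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()
```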

Code: https://github.com/sinhasam/vaal

Active learning aims to develop label-efficient algorithms by sampling the most representative queries to be labeled by an oracle. We describe a pool-based semi-supervised active learning algorithm that implicitly learns this sampling mechanism in an adversarial manner. Our method learns a latent space using a variational autoencoder (VAE) and an adversarial network trained to discriminate between unlabeled and labeled data. The mini-max game between the VAE and the adversarial network is played such that while the VAE tries to trick the adversarial network into predicting that all data points are from the labeled pool, the adversarial network learns how to discriminate between dissimilarities in the latent space. We extensively evaluate our method on various image classification and semantic segmentation benchmark datasets and establish a new state of the art on CIFAR10/100, Caltech-256, ImageNet, Cityscapes, and BDD100K. Our results demonstrate that our adversarial approach learns an effective low-dimensional latent space in large-scale settings and provides for a computationally efficient sampling method.
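
The sampling mechanism the abstract alludes to can be read as follows: after training, unlabeled points that the discriminator is most confident are *not* from the labeled pool are the ones queried. A minimal sketch of that selection step, reusing the assumed `vae` and `disc` modules from the sketch above (the function name and selection rule here are an illustrative reading, not code from the repository):

```python
import torch

def select_queries(vae, disc, unlabeled_x, budget):
    """Return indices of `budget` unlabeled points to send to the oracle."""
    vae.eval(); disc.eval()
    with torch.no_grad():
        _, _, _, z = vae(unlabeled_x)
        p_labeled = torch.sigmoid(disc(z)).squeeze(1)  # P(latent is from the labeled pool)
    # Lowest "labeled" probability = most clearly unlike the labeled pool = most informative.
    return torch.argsort(p_labeled)[:budget]
```

Because the score comes from a single forward pass through the low-dimensional latent space, this selection is computationally cheap compared with methods that require per-sample uncertainty estimates from the task network.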

