# ViewAL: Active Learning With Viewpoint Entropy for Semantic Segmentation

We propose ViewAL, a novel active learning strategy for semantic segmentation that exploits viewpoint consistency in multi-view datasets. Our core idea is that inconsistencies in model predictions across viewpoints provide a very reliable measure of uncertainty and encourage the model to perform well irrespective of the viewpoint under which objects are observed. To incorporate this uncertainty measure, we introduce a new viewpoint entropy formulation, which is the basis of our active learning strategy. In addition, we propose uncertainty computations on a superpixel level, which exploit the inherently localized signal in the segmentation task, directly lowering the annotation costs. This combination of viewpoint entropy and the use of superpixels allows us to efficiently select samples that are highly informative for improving the network. We demonstrate that our proposed active learning strategy not only yields the best-performing models for the same amount of required labeled data, but also significantly reduces labeling effort. Our method achieves 95% of maximum achievable network performance using only 7%, 17%, and 24% labeled data on SceneNet-RGBD, ScanNet, and Matterport3D, respectively. On these datasets, the best state-of-the-art method achieves the same performance with 14%, 27%, and 33% labeled data. Finally, we demonstrate that labeling using superpixels yields the same quality of ground truth as labeling whole images, but requires 25% less time.
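The core idea can be sketched in a few lines: average the softmax predictions that different views produce for the same surface point, take the entropy of that mean distribution, and aggregate per-pixel scores over superpixels. This is a minimal illustration under our own naming, not the authors' implementation.

```python
import numpy as np

def viewpoint_entropy(view_probs):
    """Entropy of the mean class distribution across views.

    view_probs: (n_views, n_classes) array; each row is a softmax
    prediction for the same surface point seen from a different view.
    Views that disagree yield a flatter mean, hence higher entropy.
    """
    mean_p = view_probs.mean(axis=0)
    return float(-(mean_p * np.log(mean_p + 1e-12)).sum())

def superpixel_score(pixel_entropy, superpixel_ids, sp):
    """Average the per-pixel entropy over one superpixel region, so
    annotation requests are localized rather than whole-image."""
    return float(pixel_entropy[superpixel_ids == sp].mean())

# Consistent views -> low entropy; disagreeing views -> high entropy.
agree = np.array([[0.90, 0.10], [0.88, 0.12]])
disagree = np.array([[0.90, 0.10], [0.10, 0.90]])
```

Superpixels with the highest average score would then be queried for labels first.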

# Variational Adversarial Active Learning

Active learning aims to develop label-efficient algorithms by sampling the most representative queries to be labeled by an oracle. We describe a pool-based semi-supervised active learning algorithm that implicitly learns this sampling mechanism in an adversarial manner. Our method learns a latent space using a variational autoencoder (VAE) and an adversarial network trained to discriminate between unlabeled and labeled data. The mini-max game between the VAE and the adversarial network is played such that while the VAE tries to trick the adversarial network into predicting that all data points are from the labeled pool, the adversarial network learns how to discriminate between dissimilarities in the latent space. We extensively evaluate our method on various image classification and semantic segmentation benchmark datasets and establish a new state of the art on CIFAR10/100, Caltech-256, ImageNet, Cityscapes, and BDD100K. Our results demonstrate that our adversarial approach learns an effective low-dimensional latent space in large-scale settings and provides a computationally efficient sampling method.
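Once the VAE and discriminator are trained, sample selection reduces to ranking: the discriminator outputs, for each unlabeled sample's latent code, a probability that it came from the labeled pool, and the least "labeled-looking" samples are queried. A minimal sketch of that final step, assuming the per-sample probabilities have already been computed (the function name is ours):

```python
import numpy as np

def select_queries(p_labeled, budget):
    """Rank unlabeled samples by the discriminator's probability that
    they belong to the labeled pool; query the samples it is least
    fooled by, i.e. those most unlike the labeled data in latent space.

    p_labeled: 1-D array, p_labeled[i] = discriminator score for
    unlabeled sample i. Returns indices of the `budget` lowest scores.
    """
    return np.argsort(p_labeled)[:budget]

picked = select_queries(np.array([0.9, 0.2, 0.7, 0.1]), budget=2)
# samples 3 and 1 have the lowest labeled-probability
```

The adversarial training itself (VAE reconstruction loss plus the min-max binary cross-entropy game) is omitted here for brevity.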

# Learning Loss for Active Learning

The performance of deep neural networks improves with more annotated data. The problem is that the budget for annotation is limited. One solution to this is active learning, where a model asks a human to annotate the data it perceives as uncertain. A variety of recent methods have been proposed to apply active learning to deep networks, but most of them are either designed specifically for their target tasks or computationally inefficient for large networks. In this paper, we propose a novel active learning method that is simple but task-agnostic, and works efficiently with deep networks. We attach a small parametric module, named the "loss prediction module," to a target network, and train it to predict the target losses of unlabeled inputs. This module can then suggest data on which the target model is likely to produce a wrong prediction. The method is task-agnostic because the module is learned from a single loss regardless of the target task. We rigorously validate our method on image classification, object detection, and human pose estimation with recent network architectures. The results demonstrate that our method consistently outperforms previous methods across these tasks.
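The module itself is tiny: intermediate feature maps from the target network are each reduced by global average pooling (GAP), passed through a small per-level linear layer, concatenated, and mapped to one scalar, the predicted loss. The sketch below shows that data flow in plain NumPy with hypothetical shapes and our own function names; it is an illustration of the structure, not the paper's code.

```python
import numpy as np

def gap(feature_map):
    """Global average pooling over spatial dims: (C, H, W) -> (C,)."""
    return feature_map.mean(axis=(1, 2))

def loss_prediction_head(feature_maps, level_weights, out_w):
    """Map multi-level features to a scalar predicted loss:
    GAP -> per-level linear + ReLU -> concatenate -> final linear.
    """
    hidden = [np.maximum(0.0, w @ gap(f))
              for w, f in zip(level_weights, feature_maps)]
    return float(out_w @ np.concatenate(hidden))

rng = np.random.default_rng(0)
feats = [rng.standard_normal((4, 8, 8)), rng.standard_normal((8, 4, 4))]
weights = [rng.standard_normal((16, 4)), rng.standard_normal((16, 8))]
pred_loss = loss_prediction_head(feats, weights, rng.standard_normal(32))
```

At query time, the unlabeled samples with the highest predicted loss are sent to the annotator.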

GAP refers to global average pooling. Training jointly optimizes the loss functions of the target model and the loss prediction module. An important design question is the loss function of the loss prediction module itself. The most direct idea is the mean squared error between the module's predicted value and the actual loss value. This is clearly unreasonable, because the loss of a given sample keeps changing as training iterates, so the "label" the module regresses toward is inconsistent across iterations, which leads to poor prediction quality. The authors therefore propose defining the loss prediction module's loss function over data pairs, comparing the relative order of losses within each pair rather than their absolute values.
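The pairwise idea can be written as a margin ranking loss: within each pair, only the relative order of the true losses is used as supervision, which is stable even while their absolute scale drifts during training. A minimal sketch under our own naming:

```python
import numpy as np

def pairwise_ranking_loss(pred, target, margin=1.0):
    """Margin ranking loss over sample pairs (0,1), (2,3), ...:
    penalize the loss predictor when it orders a pair differently
    from the true target-network losses.

    pred, target: 1-D arrays of predicted and actual per-sample losses.
    """
    pi, pj = pred[0::2], pred[1::2]
    ti, tj = target[0::2], target[1::2]
    sign = np.where(ti > tj, 1.0, -1.0)
    # zero when the predicted gap has the right sign and exceeds margin
    return float(np.maximum(0.0, -sign * (pi - pj) + margin).mean())

# Correct ordering with a wide enough gap -> no penalty.
pairwise_ranking_loss(np.array([2.0, 0.0]), np.array([1.0, 0.3]))  # 0.0
# Reversed ordering -> penalized.
pairwise_ranking_loss(np.array([0.0, 2.0]), np.array([1.0, 0.3]))  # 3.0
```

Because only the sign of the true-loss difference enters the loss, the moving absolute loss values no longer produce inconsistent regression targets.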

# A Survey of Deep Active Learning

Active learning (AL) attempts to maximize a model's performance gain while annotating the fewest samples possible. Deep learning (DL) is greedy for data and requires a large amount of data supply to optimize a massive number of parameters if the model is to learn how to extract high-quality features. In recent years, due to the rapid development of internet technology, we have entered an era of information abundance characterized by massive amounts of available data. As a result, DL has attracted significant attention from researchers and has been rapidly developed. Compared with DL, however, researchers have shown relatively low interest in AL. This is mainly because, before the rise of DL, traditional machine learning required relatively few labeled samples, meaning that early AL was rarely accorded the value it deserves. Although DL has made breakthroughs in various fields, most of this success is due to the large number of publicly available annotated datasets. However, the acquisition of a large number of high-quality annotated datasets consumes a lot of manpower, making it infeasible in fields that require high levels of expertise (such as speech recognition, information extraction, medical images, etc.). Therefore, AL is gradually coming to receive the attention it is due. It is thus natural to investigate whether AL can be used to reduce the cost of sample annotation while retaining the powerful learning capabilities of DL. As a result of such investigations, deep active learning (DAL) has emerged. Although research on this topic is quite abundant, there has not yet been a comprehensive survey of DAL-related works; accordingly, this article aims to fill this gap. We provide a formal classification method for the existing work, along with a comprehensive and systematic overview. In addition, we analyze and summarize the development of DAL from an application perspective. Finally, we discuss the confusion and problems associated with DAL and provide some possible development directions.