Few-shot learning and self-supervised learning address different facets of the same problem: how to train a model with little or no labeled data. Few-shot learning seeks optimization methods and models that can learn to recognize patterns efficiently in the low-data regime. Self-supervised learning instead focuses on unlabeled data, mining it for the supervisory signal needed to train high-capacity deep neural networks. In this work we exploit the complementarity of these two domains and propose an approach that improves few-shot learning through self-supervision. We use self-supervision as an auxiliary task in a few-shot learning pipeline, enabling feature extractors to learn richer and more transferable visual representations while still using only a few annotated samples. Through self-supervision, our approach naturally extends to using diverse unlabeled data from other datasets in the few-shot setting. We report consistent improvements across an array of architectures, datasets, and self-supervision techniques.
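The auxiliary-task setup described above can be sketched as a multi-task objective: a shared feature extractor feeds both the few-shot classifier and a self-supervised head (for instance, rotation prediction), and training minimizes a weighted sum of the two losses. The sketch below is illustrative only; the function names, the 4-way rotation head, and the weight `alpha` are assumptions, not the paper's exact formulation.

```python
import numpy as np

def cross_entropy(logits, label):
    """Numerically stable cross-entropy for a single example."""
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[label]

def combined_loss(cls_logits, cls_label, rot_logits, rot_label, alpha=1.0):
    """Few-shot classification loss plus a weighted self-supervised
    auxiliary loss computed on the same shared features."""
    supervised = cross_entropy(cls_logits, cls_label)
    self_supervised = cross_entropy(rot_logits, rot_label)
    return supervised + alpha * self_supervised

# Toy example: 5-way few-shot logits and 4-way rotation logits
# (0/90/180/270 degrees) produced by two heads on shared features.
cls_logits = np.array([2.0, 0.1, -1.0, 0.3, 0.0])
rot_logits = np.array([0.5, 1.5, -0.5, 0.0])
loss = combined_loss(cls_logits, cls_label=0, rot_logits=rot_logits, rot_label=1)
```

Because the rotation labels are generated from the images themselves, the auxiliary term can also be computed on unlabeled images from other datasets, which is how the approach extends beyond the annotated few-shot samples.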