Tag Archive: Image Recognition

MLP-Mixer: An all-MLP Architecture for Vision

The strong performance of vision transformers on image classification and other vision tasks is often attributed to the design of their multi-head attention layers. However, the extent to which attention is responsible for this strong performance remains unclear. In this short report, we ask: is the attention layer even necessary? Specifically, we replace the attention layer in a vision transformer with a feed-forward layer applied over the patch dimension. The resulting architecture is simply a series of feed-forward layers applied over the patch and feature dimensions in an alternating fashion. In experiments on ImageNet, this architecture performs surprisingly well: a ViT/DeiT-base-sized model obtains 74.9% top-1 accuracy, compared to 77.9% and 79.9% for ViT and DeiT respectively. These results indicate that aspects of vision transformers other than attention, such as the patch embedding, may be more responsible for their strong performance than previously thought. We hope these results prompt the community to spend more time trying to understand why our current models are as effective as they are.

This paper proposes an attention-layer-free, transformer-style architecture built entirely from MLPs.
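
As a rough illustration of the core idea, here is a minimal PyTorch-style sketch (my own, not the authors' code) of a block that replaces attention with a feed-forward layer applied over the patch dimension; the class name and the dimensions in the usage comment are illustrative.

import torch
import torch.nn as nn

class FeedForwardOverPatches(nn.Module):
    # Replaces the attention layer: a small MLP mixes information across the
    # patch (token) dimension instead of across the feature dimension.
    def __init__(self, num_patches, hidden_dim):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_dim)
        self.mix = nn.Sequential(
            nn.Linear(num_patches, num_patches),
            nn.GELU(),
            nn.Linear(num_patches, num_patches),
        )

    def forward(self, x):                        # x: (batch, num_patches, hidden_dim)
        y = self.norm(x).transpose(1, 2)         # -> (batch, hidden_dim, num_patches)
        y = self.mix(y).transpose(1, 2)          # mix across patches, then transpose back
        return x + y                             # residual connection

# Alternating this block with an ordinary feed-forward block over the feature
# dimension gives the attention-free architecture described above, e.g.:
# block = FeedForwardOverPatches(num_patches=196, hidden_dim=768)
# out = block(torch.randn(2, 196, 768))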

Can Vision Transformers Learn without Natural Images?

Can we complete pre-training of Vision Transformers (ViT) without natural images and human-annotated labels? Although a pre-trained ViT seems to heavily rely on a large-scale dataset and human-annotated labels, recent large-scale datasets contain several problems in terms of privacy violations, inadequate fairness protection, and labor-intensive annotation. In the present paper, we pre-train ViT without any image collections and annotation labor. We experimentally verify that our proposed framework partially outperforms sophisticated Self-Supervised Learning (SSL) methods like SimCLRv2 and MoCov2 without using any natural images in the pre-training phase. Moreover, although the ViT pre-trained without natural images produces some different visualizations from ImageNet pre-trained ViT, it can interpret natural image datasets to a large extent. For example, the performance rates on the CIFAR-10 dataset are as follows: our proposal 97.6 vs. SimCLRv2 97.4 vs. ImageNet 98.0.

https://arxiv.org/abs/2103.13023

Can we complete the pre-training of Vision Transformers (ViT) without natural images and human-annotated labels? Although ViT pre-training appears to rely heavily on large-scale datasets and human annotation, recent large-scale datasets suffer from problems such as privacy violations, inadequate fairness protection, and labor-intensive labeling. In this paper, we pre-train ViT without any collected images or annotation labor. We verify experimentally that, without using any natural images in the pre-training phase, our framework partially outperforms sophisticated self-supervised learning methods such as SimCLRv2 and MoCov2. Moreover, although the ViT pre-trained without natural images produces somewhat different visualizations from an ImageNet pre-trained ViT, it can still interpret natural image datasets to a large extent.

DeepViT: Towards Deeper Vision Transformer

Vision transformers (ViTs) have been successfully applied in image classification tasks recently. In this paper, we show that, unlike convolution neural networks (CNNs) that can be improved by stacking more convolutional layers, the performance of ViTs saturates quickly when scaled to be deeper. More specifically, we empirically observe that such scaling difficulty is caused by the attention collapse issue: as the transformer goes deeper, the attention maps gradually become similar and even much the same after certain layers. In other words, the feature maps tend to be identical in the top layers of deep ViT models. This fact demonstrates that in deeper layers of ViTs, the self-attention mechanism fails to learn effective concepts for representation learning and hinders the model from getting the expected performance gain. Based on the above observation, we propose a simple yet effective method, named Re-attention, to re-generate the attention maps to increase their diversity at different layers with negligible computation and memory cost. The proposed method makes it feasible to train deeper ViT models with consistent performance improvements via minor modification to existing ViT models. Notably, when training a deep ViT model with 32 transformer blocks, the Top-1 classification accuracy can be improved by 1.6% on ImageNet.

https://arxiv.org/abs/2103.11886

Vision transformers (ViTs) have recently been applied successfully to image classification. In this paper, we show that, unlike CNNs, whose performance can be improved by stacking more convolutional layers, ViTs saturate quickly as they are made deeper. We observe that this scaling difficulty is caused by attention collapse: as the transformer gets deeper, the attention maps beyond a certain layer gradually become similar or even nearly identical. In other words, the feature maps in the top layers of deep ViT models tend to be the same. This finding shows that in the deeper layers of ViTs, the self-attention mechanism fails to learn effective features for representation learning, so no additional performance gain is obtained. Based on this observation, we propose a simple yet effective method called Re-attention, which restores the diversity of the attention maps across layers at only a small cost in computation and memory. The proposed method makes it feasible to train deeper ViT models while maintaining performance. In particular, our model with 32 transformer blocks gains 1.6% Top-1 accuracy on ImageNet.
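
To make the Re-attention idea concrete, below is a hedged PyTorch sketch (not the authors' implementation) in which the per-head attention maps are mixed by a learnable head-to-head matrix before weighting the values; the paper additionally normalizes the mixed maps, which is only noted in a comment here, and the names are my own.

import torch
import torch.nn as nn

class ReAttention(nn.Module):
    # Standard multi-head self-attention, except that the per-head attention maps
    # are mixed by a learnable head-to-head matrix before being applied to the values.
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Learnable mixing matrix Theta (num_heads x num_heads), initialised near identity.
        self.theta = nn.Parameter(torch.eye(num_heads) + 0.01 * torch.randn(num_heads, num_heads))

    def forward(self, x):                                    # x: (B, N, dim)
        B, N, _ = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)                 # each: (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        # Re-attention: regenerate the maps by exchanging information across heads.
        # (The paper also normalises the mixed maps; that step is omitted here.)
        attn = torch.einsum('hg,bgij->bhij', self.theta, attn)
        out = (attn @ v).transpose(1, 2).reshape(B, N, -1)
        return self.proj(out)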

Transformer in Transformer

Transformer is a type of self-attention-based neural networks originally applied for NLP tasks. Recently, pure transformer-based models are proposed to solve computer vision problems. These visual transformers usually view an image as a sequence of patches while they ignore the intrinsic structure information inside each patch. In this paper, we propose a novel Transformer-iN-Transformer (TNT) model for modeling both patch-level and pixel-level representation. In each TNT block, an outer transformer block is utilized to process patch embeddings, and an inner transformer block extracts local features from pixel embeddings. The pixel-level feature is projected to the space of patch embedding by a linear transformation layer and then added into the patch. By stacking the TNT blocks, we build the TNT model for image recognition. Experiments on ImageNet benchmark and downstream tasks demonstrate the superiority and efficiency of the proposed TNT architecture. For example, our TNT achieves 81.3% top-1 accuracy on ImageNet which is 1.5% higher than that of DeiT with similar computational cost.

https://arxiv.org/abs/2103.00112

Transformer is a self-attention-based neural network architecture originally developed for NLP tasks. Recently, pure transformer-based models have been proposed to solve computer vision problems. These models usually treat an image as a sequence of patches and ignore the intrinsic structure information inside each patch. In this paper, we propose an architecture called TNT that models representations at both the patch level and the pixel level. In each TNT block, an outer transformer block processes the patch embeddings, while an inner transformer block extracts local features from the pixel embeddings. The pixel-level features are projected into the patch-embedding space by a linear layer and added to the patch embeddings. Stacking TNT blocks yields the TNT model for image recognition. Experiments on ImageNet and downstream tasks demonstrate the advantages of TNT.
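
The following is a simplified PyTorch sketch of one TNT block, using standard nn.TransformerEncoderLayer modules as stand-ins for the inner and outer transformer blocks; the class name, dimensions, and head counts are illustrative assumptions, not values from the released code.

import torch
import torch.nn as nn

class TNTBlock(nn.Module):
    # The inner transformer models the pixels inside each patch; a linear projection folds
    # the pixel-level features back into the patch embedding processed by the outer transformer.
    def __init__(self, patch_dim=384, pixel_dim=24, pixels_per_patch=16):
        super().__init__()
        self.inner = nn.TransformerEncoderLayer(pixel_dim, nhead=4, batch_first=True)
        self.pixel_to_patch = nn.Linear(pixels_per_patch * pixel_dim, patch_dim)
        self.outer = nn.TransformerEncoderLayer(patch_dim, nhead=6, batch_first=True)

    def forward(self, pixel_emb, patch_emb):
        # pixel_emb: (B * num_patches, pixels_per_patch, pixel_dim)
        # patch_emb: (B, num_patches, patch_dim)
        B, num_patches, _ = patch_emb.shape
        pixel_emb = self.inner(pixel_emb)                         # pixel-level representation
        fused = self.pixel_to_patch(pixel_emb.reshape(B, num_patches, -1))
        patch_emb = self.outer(patch_emb + fused)                 # patch-level representation
        return pixel_emb, patch_emb

# blocks = nn.ModuleList([TNTBlock() for _ in range(12)])   # stacking yields the full model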

Free Lunch for Few-shot Learning: Distribution Calibration

Learning from a limited number of samples is challenging since the learned model can easily become overfitted based on the biased distribution formed by only a few training examples. In this paper, we calibrate the distribution of these few-sample classes by transferring statistics from the classes with sufficient examples, then an adequate number of examples can be sampled from the calibrated distribution to expand the inputs to the classifier. We assume every dimension in the feature representation follows a Gaussian distribution so that the mean and the variance of the distribution can borrow from that of similar classes whose statistics are better estimated with an adequate number of samples. Our method can be built on top of off-the-shelf pretrained feature extractors and classification models without extra parameters. We show that a simple logistic regression classifier trained using the features sampled from our calibrated distribution can outperform the state-of-the-art accuracy on two datasets (~5% improvement on miniImageNet compared to the next best). The visualization of these generated features demonstrates that our calibrated distribution is an accurate estimation.

https://arxiv.org/abs/2101.06395

Few-shot learning remains challenging because a model trained on only a few examples easily overfits to the biased distribution they form. In this paper, we calibrate the distribution of these few-sample classes by transferring statistics from classes with sufficient data, and then sample enough examples from the calibrated distribution to enlarge the classifier's training set. We assume every dimension of the feature representation follows a Gaussian distribution, so the mean and variance can be borrowed from similar classes whose statistics are estimated reliably from ample data. Our method can be built on top of off-the-shelf pretrained feature extractors and classification models without introducing extra parameters. Experiments show that a simple logistic regression classifier trained on features sampled from the calibrated distribution gains roughly 5% on miniImageNet.
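
A minimal NumPy sketch of the calibration step for a single few-shot class is given below; the function name, the nearest-class selection, and the diagonal alpha term are my own simplifications of the paper's procedure, not its reference code.

import numpy as np

def calibrate_and_sample(support_feats, base_means, base_covs, k=2, alpha=0.2, n_samples=100):
    # support_feats: (n_shot, d) features of the few labelled samples of one novel class
    # base_means:    (n_base, d) per-class feature means estimated from the data-rich classes
    # base_covs:     (n_base, d, d) per-class feature covariances from the data-rich classes
    query = support_feats.mean(axis=0)
    # Borrow statistics from the k base classes whose means are closest to the support mean.
    nearest = np.argsort(np.linalg.norm(base_means - query, axis=1))[:k]
    mean = np.concatenate([base_means[nearest], query[None]], axis=0).mean(axis=0)
    cov = base_covs[nearest].mean(axis=0) + alpha * np.eye(len(query))
    # Draw extra examples from the calibrated Gaussian; together with the support
    # features they can train a plain logistic-regression classifier.
    return np.random.multivariate_normal(mean, cov, size=n_samples)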

ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks

Recently, the channel attention mechanism has demonstrated great potential in improving the performance of deep convolutional neural networks (CNNs). However, most existing methods are dedicated to developing more sophisticated attention modules for achieving better performance, which inevitably increases model complexity. To overcome the paradox of performance and complexity trade-off, this paper proposes an Efficient Channel Attention (ECA) module, which only involves a handful of parameters while bringing clear performance gain. By dissecting the channel attention module in SENet, we empirically show avoiding dimensionality reduction is important for learning channel attention, and appropriate cross-channel interaction can preserve performance while significantly decreasing model complexity. Therefore, we propose a local cross-channel interaction strategy without dimensionality reduction, which can be efficiently implemented via 1D convolution. Furthermore, we develop a method to adaptively select kernel size of 1D convolution, determining coverage of local cross-channel interaction. The proposed ECA module is efficient yet effective, e.g., the parameters and computations of our modules against backbone of ResNet50 are 80 vs. 24.37M and 4.7e-4 GFLOPs vs. 3.86 GFLOPs, respectively, and the performance boost is more than 2% in terms of Top-1 accuracy. We extensively evaluate our ECA module on image classification, object detection and instance segmentation with backbones of ResNets and MobileNetV2. The experimental results show our module is more efficient while performing favorably against its counterparts.

https://arxiv.org/abs/1910.03151

Recently, channel attention mechanisms have been shown to greatly improve the performance of deep convolutional neural networks. However, existing methods tend to pursue better performance with ever more sophisticated attention modules, which in turn increases model complexity. To resolve this trade-off between performance and complexity, this paper proposes the Efficient Channel Attention (ECA) module, which delivers a clear performance gain with only a handful of parameters. By dissecting the channel attention module in SENet, we show empirically that avoiding dimensionality reduction is important for learning channel attention, and that appropriate cross-channel interaction can preserve performance while greatly reducing model complexity. We therefore propose a local cross-channel interaction strategy without dimensionality reduction, implemented efficiently with a 1D convolution, together with a method for adaptively selecting the 1D kernel size, which determines the range of cross-channel interaction. The ECA module clearly improves over a plain ResNet50 backbone and can be used across tasks such as image classification, object detection, and instance segmentation with ResNet and MobileNetV2 backbones.
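
Below is a short PyTorch sketch of an ECA-style module; the adaptive kernel-size rule follows the gamma=2, b=1 setting described in the paper, but the code itself is an illustrative reimplementation rather than the reference release.

import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    # Squeeze each channel to one value, then let each channel interact with its k
    # neighbours through a 1D convolution: no dimensionality reduction, very few parameters.
    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        t = int(abs((math.log2(channels) + b) / gamma))    # kernel size grows with log(C) ...
        k = t if t % 2 else t + 1                          # ... and is forced to be odd
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                                  # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                             # global average pooling -> (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)           # local cross-channel interaction
        return x * torch.sigmoid(y)[:, :, None, None]      # per-channel reweighting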

Training data-efficient image transformers & distillation through attention

Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. However, these visual transformers are pre-trained with hundreds of millions of images using an expensive infrastructure, thereby limiting their adoption by the larger community. In this work, with an adequate training scheme, we produce a competitive convolution-free transformer by training on Imagenet only. We train it on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external data. We share our code and models to accelerate community advances on this line of research. Additionally, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. We show the interest of this token-based distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both Imagenet (where we obtain up to 84.4% accuracy) and when transferring to other tasks. 

https://arxiv.org/pdf/2012.12877v1.pdf

Recently, attention-based neural networks have been widely applied to image-understanding tasks such as image classification. However, these vision transformers are pre-trained on hundreds of millions of images using expensive infrastructure, which limits their adoption by the wider community. In this work, with an appropriate training scheme, we train a competitive convolution-free transformer on ImageNet alone, on a single computer in under three days. Our reference model (86M parameters) reaches 83.1% top-1 accuracy on ImageNet. We share our code and models to accelerate community research. In addition, we introduce a teacher-student strategy specific to transformers: a distillation token ensures that the student learns from the teacher through attention. We show the value of this token-based distillation, especially when a convnet is used as the teacher. This lets us report results competitive with convnets on ImageNet (up to 84.4% accuracy in our experiments), and the advantage carries over when transferring to other tasks.
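
The hard-label variant of the distillation objective can be sketched as follows; the distillation token itself (a learnable token that flows through the transformer alongside the class token) is omitted, and the function and argument names are my own.

import torch
import torch.nn.functional as F

def hard_distillation_loss(cls_logits, dist_logits, teacher_logits, targets):
    # cls_logits:     student output read from the class token        (B, num_classes)
    # dist_logits:    student output read from the distillation token (B, num_classes)
    # teacher_logits: convnet teacher's output on the same images     (B, num_classes)
    teacher_labels = teacher_logits.argmax(dim=1)               # hard teacher decisions
    loss_cls = F.cross_entropy(cls_logits, targets)             # class token learns from the true labels
    loss_dist = F.cross_entropy(dist_logits, teacher_labels)    # distillation token learns from the teacher
    return 0.5 * (loss_cls + loss_dist)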

RandAugment: Practical automated data augmentation with a reduced search space

Recent work has shown that data augmentation has the potential to significantly improve the generalization of deep learning models. Recently, automated augmentation strategies have led to state-of-the-art results in image classification and object detection. While these strategies were optimized for improving validation accuracy, they also led to state-of-the-art results in semi-supervised learning and improved robustness to common corruptions of images. An obstacle to a large-scale adoption of these methods is a separate search phase which increases the training complexity and may substantially increase the computational cost. Additionally, due to the separate search phase, these approaches are unable to adjust the regularization strength based on model or dataset size. Automated augmentation policies are often found by training small models on small datasets and subsequently applied to train larger models. In this work, we remove both of these obstacles. RandAugment has a significantly reduced search space which allows it to be trained on the target task with no need for a separate proxy task. Furthermore, due to the parameterization, the regularization strength may be tailored to different model and dataset sizes. RandAugment can be used uniformly across different tasks and datasets and works out of the box, matching or surpassing all previous automated augmentation approaches on CIFAR-10/100, SVHN, and ImageNet. On the ImageNet dataset we achieve 85.0% accuracy, a 0.6% increase over the previous state-of-the-art and 1.0% increase over baseline augmentation. On object detection, RandAugment leads to 1.0-1.3% improvement over baseline augmentation, and is within 0.3% mAP of AutoAugment on COCO. Finally, due to its interpretable hyperparameter, RandAugment may be used to investigate the role of data augmentation with varying model and dataset size. Code is available online.

https://arxiv.org/abs/1909.13719

Recent work has shown that data augmentation can substantially improve the generalization of deep learning models, and automated augmentation strategies have recently delivered considerable gains in image classification and object detection. Although these strategies are optimized for validation accuracy, they also improve semi-supervised learning and robustness to common image corruptions. The obstacle to adopting them at scale is a separate search phase, which increases training complexity and computational cost; because of this separate phase, the regularization strength also cannot be adapted to the model or dataset size. Automated augmentation policies are usually found by training small models on small datasets and then applied to train larger models. In this work, we propose RandAugment to address these problems. It greatly reduces the search space, so it can be trained directly on the target task without a separate proxy task, and thanks to its parameterization the regularization strength can be tailored to different model and dataset sizes. Our method applies uniformly across tasks and datasets and matches or surpasses existing automated augmentation approaches.
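
Since the whole method reduces to two hyperparameters, the number of transforms and a shared magnitude, a toy PIL-based sketch looks like this; the operation set and magnitude mapping below are a reduced illustration of my own, not the paper's full list of roughly a dozen operations.

import random
from PIL import Image, ImageEnhance, ImageOps

def _enhance(factory):
    return lambda img, m: factory(img).enhance(1.0 + m)

OPS = [                                                # a reduced, illustrative operation set
    lambda img, m: img.rotate(30 * m),
    lambda img, m: ImageOps.posterize(img, max(1, 8 - int(4 * m))),
    lambda img, m: ImageOps.solarize(img, int(256 - 128 * m)),
    lambda img, m: ImageOps.autocontrast(img),
    _enhance(ImageEnhance.Color),
    _enhance(ImageEnhance.Contrast),
    _enhance(ImageEnhance.Brightness),
    _enhance(ImageEnhance.Sharpness),
]

def rand_augment(img, n=2, magnitude=9):
    m = magnitude / 30.0                               # map the integer magnitude to roughly [0, 1]
    for op in random.choices(OPS, k=n):                # no search: n and magnitude are the only knobs
        img = op(img, m)
    return img

# img = Image.open("photo.jpg").convert("RGB")
# augmented = rand_augment(img, n=2, magnitude=9)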

Learning Transferable Architectures for Scalable Image Recognition

Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (the “NASNet search space”) which enables transferability. In our experiments, we search for the best convolutional layer (or “cell”) on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, named “NASNet architecture”. We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, NASNet achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS – a reduction of 28% in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74% top-1 accuracy, which is 3.1% better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO dataset.

https://arxiv.org/pdf/1707.07012.pdf

Designing neural network architectures for image recognition often takes substantial time and effort. In this paper, we propose a method that learns the model architecture directly from the dataset of interest. Because this is expensive on large datasets, we search for an architectural building block on a small dataset and then transfer that block to a larger dataset. The key contribution of this work is the design of a new search space (the NASNet search space) that enables this transfer. In our experiments, we search for the best convolutional cell on CIFAR-10 and then apply it to ImageNet by stacking more copies of the cell; the resulting architecture is called the NASNet architecture. We also introduce a new regularization technique called ScheduledDropPath, which significantly improves the generalization of NASNet models.
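
ScheduledDropPath is essentially path dropping with a drop probability that is ramped up linearly over training; a hedged PyTorch sketch is shown below, where the class name, the final drop probability, and the progress attribute (which the training loop would have to update) are illustrative assumptions rather than details from the paper's implementation.

import torch
import torch.nn as nn

class ScheduledDropPath(nn.Module):
    # Drops an entire path inside a cell with a probability that increases linearly
    # over the course of training (progress in [0, 1] is set by the training loop).
    def __init__(self, final_drop_prob=0.3):
        super().__init__()
        self.final_drop_prob = final_drop_prob
        self.progress = 0.0

    def forward(self, x):                          # x: output of one path, shape (B, ...)
        drop_prob = self.final_drop_prob * self.progress
        if not self.training or drop_prob == 0.0:
            return x
        keep = 1.0 - drop_prob
        # One Bernoulli draw per example: the whole path is either kept or zeroed.
        mask = (torch.rand(x.shape[0], *([1] * (x.dim() - 1)), device=x.device) < keep).to(x.dtype)
        return x * mask / keep                     # rescale so the expected activation is unchanged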