Tag Archives: Self-Attention

LambdaNetworks: Modeling Long-Range Interactions Without Attention

We present lambda layers — an alternative framework to self-attention — for capturing long-range interactions between an input and structured contextual information (e.g. a pixel surrounded by other pixels). Lambda layers capture such interactions by transforming available contexts into linear functions, termed lambdas, and applying these linear functions to each input separately. Similar to linear attention, lambda layers bypass expensive attention maps, but in contrast, they model both content and position-based interactions which enables their application to large structured inputs such as images. The resulting neural network architectures, LambdaNetworks, significantly outperform their convolutional and attentional counterparts on ImageNet classification, COCO object detection and COCO instance segmentation, while being more computationally efficient. Additionally, we design LambdaResNets, a family of hybrid architectures across different scales, that considerably improves the speed-accuracy tradeoff of image classification models. LambdaResNets reach excellent accuracies on ImageNet while being 3.2 – 4.4x faster than the popular EfficientNets on modern machine learning accelerators. When training with an additional 130M pseudo-labeled images, LambdaResNets achieve up to a 9.5x speed-up over the corresponding EfficientNet checkpoints.

https://arxiv.org/abs/2102.08602

In this paper we propose lambda layers, an alternative framework to self-attention that captures long-range interactions between an input and structured contextual information (e.g. a pixel and its surrounding pixels). Lambda layers transform the available context into linear functions, termed lambdas, and apply these linear functions to each input separately. Like linear attention, lambda layers avoid computing expensive attention maps; unlike it, they model both content-based and position-based interactions, which makes them applicable to large structured inputs such as images. Built from lambda layers, the resulting LambdaNetworks achieve excellent results on image classification, object detection and instance segmentation.
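The mechanism above can be sketched in a few lines of NumPy. This is a hedged, minimal illustration of the content + position lambdas described in the abstract, not the authors' implementation: the weight matrices and the dense `pos_emb` tensor are illustrative assumptions (the paper uses structured relative-position embeddings and a multi-query formulation).

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def lambda_layer(x, context, wq, wk, wv, pos_emb):
    """Sketch of a lambda layer: the context is summarised into linear
    functions (lambdas) that are then applied to each query separately.

    x:       (n, d)    input elements
    context: (m, d)    context elements
    wq, wk:  (d, dk)   query / key projections (assumed shapes)
    wv:      (d, dv)   value projection (assumed shape)
    pos_emb: (n, m, dk) hypothetical relative-position embeddings
    """
    q = x @ wq                                   # (n, dk)
    k = softmax(context @ wk, axis=0)            # keys normalised over context
    v = context @ wv                             # (m, dv)
    # content lambda: one (dk, dv) linear map shared by every position;
    # note no n x m attention map is ever materialised on this path
    content_lambda = k.T @ v
    # position lambdas: one linear map per query position
    pos_lambdas = np.einsum('nmk,mv->nkv', pos_emb, v)
    lambdas = content_lambda[None] + pos_lambdas  # (n, dk, dv)
    # apply each position's lambda to that position's query
    return np.einsum('nk,nkv->nv', q, lambdas)    # (n, dv)
```

The key design point visible here is that the content path compresses the context into a single small `(dk, dv)` matrix before touching the queries, which is what lets lambda layers bypass quadratic attention maps.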

Global Self-Attention Networks.

Recently, a series of works in computer vision have shown promising results on
various image and video understanding tasks using self-attention. However, due
to the quadratic computational and memory complexities of self-attention, these
works either apply attention only to low-resolution feature maps in later stages of
a deep network or restrict the receptive field of attention in each layer to a small
local region. To overcome these limitations, this work introduces a new global
self-attention module, referred to as the GSA module, which is efficient enough
to serve as the backbone component of a deep network. This module consists of
two parallel layers: a content attention layer that attends to pixels based only on
their content and a positional attention layer that attends to pixels based on their
spatial locations. The output of this module is the sum of the outputs of the two
layers. Based on the proposed GSA module, we introduce new standalone global
attention-based deep networks that use GSA modules instead of convolutions to
model pixel interactions. Due to the global extent of the proposed GSA module,
a GSA network has the ability to model long-range pixel interactions throughout
the network. Our experimental results show that GSA networks outperform the
corresponding convolution-based networks significantly on the CIFAR-100 and
ImageNet datasets while using less parameters and computations. The proposed
GSA networks also outperform various existing attention-based networks on the
ImageNet dataset.

Due to the quadratic computational and memory complexity of self-attention, most current computer vision works either apply self-attention only to low-resolution feature maps or restrict each layer's attention to a small local receptive field. To overcome these limitations, this paper proposes a novel global self-attention module, named the GSA module, which is efficient enough to serve as the backbone component of a deep network. The module consists of two parallel layers: a content attention layer that attends to pixels based on their content, and a positional attention layer that attends to pixels based on their spatial locations; the module's output is the sum of the two layers' outputs. Building on the proposed GSA module, the authors introduce new standalone global-attention-based deep networks that use GSA modules instead of convolutions to model pixel interactions. Owing to the global extent of the GSA module, a GSA network can model long-range pixel interactions throughout the network.
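The two-branch structure can be sketched as follows. This is only an illustration of the "content attention + positional attention, summed" idea under assumed shapes; the paper's actual content branch and its axial positional attention are more elaborate, and `rel_scores` here is a hypothetical stand-in for learned relative-position logits.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gsa_module(x, wq, wk, wv, rel_scores):
    """Two parallel layers whose outputs are summed.

    x:          (n, d)   flattened pixel features
    wq, wk:     (d, dk)  query / key projections (assumed shapes)
    wv:         (d, dv)  value projection (assumed shape)
    rel_scores: (n, n)   stand-in for learned relative-position logits
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    # content branch: softmax the keys over positions and fold them into the
    # values first (an efficient-attention-style factorisation that avoids an
    # explicit n x n map), so weights depend only on pixel content
    content = q @ (softmax(k, axis=0).T @ v)       # (n, dv)
    # positional branch: weights depend only on spatial offsets, not content
    positional = softmax(rel_scores, axis=-1) @ v  # (n, dv)
    return content + positional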

Paper: https://openreview.net/pdf?id=KiFeuZu24k

Code: https://github.com/lucidrains/global-self-attention-network

Stand-Alone Self-Attention in Vision Models


Convolutions are a fundamental building block of modern computer vision systems. Recent approaches have argued for going beyond convolutions in order to capture long-range dependencies. These efforts focus on augmenting convolutional models with content-based interactions, such as self-attention and non-local means, to achieve gains on a number of vision tasks. The natural question that arises is whether attention can be a stand-alone primitive for vision models instead of serving as just an augmentation on top of convolutions. In developing and testing a pure self-attention vision model, we verify that self-attention can indeed be an effective stand-alone layer. A simple procedure of replacing all instances of spatial convolutions with a form of self-attention applied to ResNet model produces a fully self-attentional model that outperforms the baseline on ImageNet classification with 12% fewer FLOPS and 29% fewer parameters. On COCO object detection, a pure self-attention model matches the mAP of a baseline RetinaNet while having 39% fewer FLOPS and 34% fewer parameters. Detailed ablation studies demonstrate that self-attention is especially impactful when used in later layers. These results establish that stand-alone self-attention is an important addition to the vision practitioner’s toolbox.

The paper proposes a stand-alone self-attention layer and uses it to build fully attentional models, verifying that content-based interactions can serve as the primary primitive for feature extraction in vision models. In image classification and object detection experiments, these models match the accuracy of conventional convolutional models while substantially reducing parameter count and computation, making the work a valuable reference.

Convolutional network design is currently key to improving performance on image tasks, and the convolution's translation equivariance has made it the workhorse of image analysis. Constrained by a fixed receptive field size, however, convolutions struggle to capture long-range pixel relationships, a problem that attention already solves well in sequence models. Attention modules have therefore begun to appear inside conventional convolutional networks, e.g. the channel-based attention of Squeeze-and-Excitation and the spatially-aware attention of the Non-local Network. These works insert global attention layers as plug-ins into existing convolutional modules; because this global form attends over all spatial positions of the input, it can only be applied to small feature maps after the network has heavily downsampled, which typically limits its benefit.

The paper instead proposes a simple local self-attention layer that uses content-based interactions as the primary feature-extraction tool rather than as an augmentation of convolutions, and that can handle both large and small inputs. Using this stand-alone attention layer, the authors build fully attentional vision models whose performance on image classification and object detection exceeds the fully convolutional baselines.
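A local self-attention layer of this kind can be sketched directly: each pixel attends only to a small window around itself. This is a simplified illustration, not the paper's layer: it is single-headed, uses zero padding at the borders, and omits the relative-position embeddings that the paper adds to the attention logits.

```python
import numpy as np

def local_self_attention(x, wq, wk, wv, win=3):
    """Per-pixel attention over a win x win neighbourhood (simplified sketch).

    x:      (H, W, d)  input feature map
    wq, wk: (d, dk)    query / key projections (assumed shapes)
    wv:     (d, dv)    value projection (assumed shape)
    """
    H, W, _ = x.shape
    q, k, v = x @ wq, x @ wk, x @ wv
    r = win // 2
    kp = np.pad(k, ((r, r), (r, r), (0, 0)))  # zero-pad borders
    vp = np.pad(v, ((r, r), (r, r), (0, 0)))
    out = np.empty((H, W, wv.shape[1]))
    for i in range(H):
        for j in range(W):
            # keys/values of the win x win window centred on pixel (i, j)
            nk = kp[i:i + win, j:j + win].reshape(-1, wk.shape[1])
            nv = vp[i:i + win, j:j + win].reshape(-1, wv.shape[1])
            logits = nk @ q[i, j]
            w = np.exp(logits - logits.max())
            w /= w.sum()                       # softmax over the window
            out[i, j] = w @ nv
    return out
```

Like a convolution, the layer's cost grows with the window size rather than the image size, which is what allows it to replace spatial convolutions throughout a ResNet; unlike a convolution, the aggregation weights are computed from pixel content.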

Paper: https://arxiv.org/pdf/1906.05909.pdf