Bottleneck Transformers for Visual Recognition

We present BoTNet, a conceptually simple yet powerful backbone architecture that incorporates self-attention for multiple computer vision tasks including image classification, object detection and instance segmentation. By just replacing the spatial convolutions with global self-attention in the final three bottleneck blocks of a ResNet and no other changes, our approach improves upon the baselines significantly on instance segmentation and object detection while also reducing the parameters, with minimal overhead in latency. Through the design of BoTNet, we also point out how ResNet bottleneck blocks with self-attention can be viewed as Transformer blocks. Without any bells and whistles, BoTNet achieves 44.4% Mask AP and 49.7% Box AP on the COCO Instance Segmentation benchmark using the Mask R-CNN framework; surpassing the previous best published single model and single scale results of ResNeSt evaluated on the COCO validation set. Finally, we present a simple adaptation of the BoTNet design for image classification, resulting in models that achieve a strong performance of 84.7% top-1 accuracy on the ImageNet benchmark while being up to 2.33x faster in compute time than the popular EfficientNet models on TPU-v3 hardware. We hope our simple and effective approach will serve as a strong baseline for future research in self-attention models for vision.

https://arxiv.org/abs/2101.11605

We present BoTNet, a simple yet effective self-attention-based backbone architecture that can be widely applied to many computer vision tasks, including image classification, object detection, and instance segmentation. By merely replacing the spatial convolutions in ResNet's bottleneck blocks with global self-attention, we obtain significant performance gains on instance segmentation and object detection while also reducing the number of parameters. Through BoTNet, we also show how ResNet bottleneck blocks with self-attention can be viewed as Transformer blocks. Without any bells and whistles, BoTNet achieves 44.4% Mask AP and 49.7% Box AP on the COCO instance segmentation benchmark with Mask R-CNN as the base framework…
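
The core change described in the abstract is easy to express in code: inside a ResNet bottleneck block, swap the 3×3 spatial convolution for global multi-head self-attention over the feature map, keeping the 1×1 convolutions and the residual shortcut. The sketch below is a minimal PyTorch illustration of that idea, not the paper's official implementation; the module names (`MHSA2d`, `BoTBlock`), the layer sizes in the shape check, and the use of learned absolute position embeddings (the paper uses relative position encodings) are simplifying assumptions.

```python
# Minimal sketch (not the official BoTNet code): a ResNet bottleneck block
# whose 3x3 spatial convolution is replaced by global multi-head self-attention.
import torch
import torch.nn as nn


class MHSA2d(nn.Module):
    """Global multi-head self-attention over an H x W feature map.

    Simplification: learned absolute 2D position embeddings are used here;
    the paper uses relative position encodings.
    """

    def __init__(self, dim, heads=4, height=14, width=14):
        super().__init__()
        assert dim % heads == 0
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1, bias=False)
        self.pos = nn.Parameter(torch.randn(1, dim, height, width) * 0.02)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.qkv(x + self.pos).chunk(3, dim=1)

        def split(t):
            # (b, c, h, w) -> (b, heads, h*w tokens, head_dim)
            return t.view(b, self.heads, c // self.heads, h * w).transpose(-1, -2)

        q, k, v = split(q), split(k), split(v)
        attn = (q @ k.transpose(-1, -2)) * self.scale      # (b, heads, hw, hw)
        out = attn.softmax(dim=-1) @ v                     # (b, heads, hw, head_dim)
        return out.transpose(-1, -2).reshape(b, c, h, w)


class BoTBlock(nn.Module):
    """ResNet bottleneck block with the 3x3 conv swapped for global MHSA."""

    def __init__(self, in_dim, mid_dim, out_dim, heads=4, height=14, width=14):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_dim, mid_dim, 1, bias=False),
            nn.BatchNorm2d(mid_dim), nn.ReLU(inplace=True),
            MHSA2d(mid_dim, heads, height, width),         # replaces the 3x3 conv
            nn.BatchNorm2d(mid_dim), nn.ReLU(inplace=True),
            nn.Conv2d(mid_dim, out_dim, 1, bias=False),
            nn.BatchNorm2d(out_dim),
        )
        self.shortcut = (nn.Identity() if in_dim == out_dim
                         else nn.Conv2d(in_dim, out_dim, 1, bias=False))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.net(x) + self.shortcut(x))


if __name__ == "__main__":
    # Shape check on a hypothetical last-stage feature map (sizes are illustrative).
    x = torch.randn(2, 1024, 14, 14)
    block = BoTBlock(1024, 512, 2048, heads=4, height=14, width=14)
    print(block(x).shape)  # torch.Size([2, 2048, 14, 14])
```

Because the attention here is global over the H×W grid, its cost grows quadratically with the number of spatial positions, which is consistent with the abstract's choice of applying it only in the final three bottleneck blocks, where the feature map is smallest.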
