Vision Transformers (ViT) have achieved remarkable success in large-scale image recognition. They split every 2D image into a fixed number of patches, each of which is treated as a token. Generally, representing an image with more tokens leads to higher prediction accuracy, but it also drastically increases the computational cost. To achieve a decent trade-off between accuracy and speed, the number of tokens is empirically set to 16×16. In this paper, we argue that every image has its own characteristics, and ideally the token number should be conditioned on each individual input. In fact, we have observed that there exists a considerable number of “easy” images which can be accurately predicted with merely 4×4 tokens, while only a small fraction of “hard” ones need a finer representation. Inspired by this phenomenon, we propose a Dynamic Transformer that automatically configures a proper number of tokens for each input image. This is achieved by cascading multiple Transformers with increasing numbers of tokens, which are sequentially activated in an adaptive fashion at test time, i.e., the inference is terminated once a sufficiently confident prediction is produced. We further design efficient feature reuse and relationship reuse mechanisms across different components of the Dynamic Transformer to reduce redundant computations. Extensive empirical results on ImageNet, CIFAR-10, and CIFAR-100 demonstrate that our method significantly outperforms the competitive baselines in terms of both theoretical computational efficiency and practical inference speed.
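The early-exit cascade described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `cascade` list (models ordered from coarsest to finest token count, e.g. 4×4 up to 16×16) and the confidence `threshold` are hypothetical names, and feature/relationship reuse between stages is omitted for brevity.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def dynamic_inference(cascade, image, threshold=0.9):
    """Run models from coarse to fine; stop as soon as the predicted
    class probability exceeds the confidence threshold."""
    probs = None
    for model in cascade:
        probs = softmax(model(image))
        if max(probs) >= threshold:
            break  # sufficiently confident: terminate early
    # Return the argmax prediction of the last model consulted.
    return probs.index(max(probs))

# Illustrative usage with stand-in "models" (plain callables):
coarse = lambda img: [0.1, 3.0]   # confident -> exits at the cheap stage
fine = lambda img: [5.0, 0.0]     # only reached for "hard" inputs
prediction = dynamic_inference([coarse, fine], image=None)
```

In the paper's setting each stage is a full Transformer, and "easy" images exit at the cheap 4×4 stage, so the average per-image cost drops without sacrificing accuracy on "hard" images.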