ViViT: A Video Vision Transformer

We present pure-transformer based models for video classification, drawing upon the recent success of such models in image classification. Our model extracts spatio-temporal tokens from the input video, which are then encoded by a series of transformer layers. In order to handle the long sequences of tokens encountered in video, we propose several efficient variants of our model which factorise the spatial- and temporal-dimensions of the input. Although transformer-based models are known to only be effective when large training datasets are available, we show how we can effectively regularise the model during training and leverage pretrained image models to be able to train on comparatively small datasets. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple video classification benchmarks including Kinetics 400 and 600, Epic Kitchens, Something-Something v2 and Moments in Time, outperforming prior methods based on deep 3D convolutional networks. To facilitate further research, we will release code and models.
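The "spatio-temporal tokens" mentioned above can be obtained by cutting the video into small non-overlapping 3D patches ("tubelets") and linearly projecting each one. Below is a minimal PyTorch sketch of this idea; the class name `TubeletEmbedding`, the tubelet size (2, 16, 16) and the embedding dimension 768 are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TubeletEmbedding(nn.Module):
    """Map a video into a sequence of spatio-temporal tokens.

    A 3D convolution whose kernel and stride equal the tubelet size,
    so each non-overlapping t x h x w video patch becomes one token.
    """
    def __init__(self, embed_dim=768, tubelet_size=(2, 16, 16), in_channels=3):
        super().__init__()
        self.proj = nn.Conv3d(
            in_channels, embed_dim,
            kernel_size=tubelet_size, stride=tubelet_size,
        )

    def forward(self, video):
        # video: (batch, channels, frames, height, width)
        x = self.proj(video)              # (B, D, T', H', W')
        x = x.flatten(2).transpose(1, 2)  # (B, T'*H'*W', D) token sequence
        return x

# 16 frames of 224x224 RGB -> 8*14*14 = 1568 tokens of dimension 768
tokens = TubeletEmbedding()(torch.randn(1, 3, 16, 224, 224))
print(tokens.shape)  # torch.Size([1, 1568, 768])
```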

https://arxiv.org/abs/2103.15691

We present a pure-transformer based model for video classification, building on the success such models have already achieved in image classification. Our model extracts spatio-temporal tokens from the input video and encodes them with a series of transformer layers. To handle the resulting long token sequences, we propose several variants of the model that factorise the input along the spatial and temporal dimensions. Although transformer-based models are generally thought to be effective only with large-scale training datasets, ours can be trained on comparatively small datasets with the help of regularisation and pretrained image models. Experiments on several benchmark datasets show that our model outperforms 3D convolutional networks.
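One of the factorised variants referred to above separates attention over space from attention over time: a spatial transformer first encodes the tokens of each frame independently, and a temporal transformer then attends across the resulting per-frame representations. The sketch below illustrates this idea only; the layer counts, dimensions, and the mean-pooling used to get a per-frame vector are assumptions made for brevity, not the paper's exact design.

```python
import torch
import torch.nn as nn

class FactorisedEncoder(nn.Module):
    """Sketch of factorised spatial-then-temporal transformer encoding."""
    def __init__(self, dim=768, heads=12, spatial_layers=4, temporal_layers=4):
        super().__init__()
        layer = lambda: nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.spatial = nn.TransformerEncoder(layer(), num_layers=spatial_layers)
        self.temporal = nn.TransformerEncoder(layer(), num_layers=temporal_layers)

    def forward(self, tokens):
        # tokens: (batch, frames, patches_per_frame, dim)
        b, t, n, d = tokens.shape
        x = self.spatial(tokens.reshape(b * t, n, d))  # attention within each frame
        frame_repr = x.mean(dim=1).reshape(b, t, d)    # one pooled vector per frame
        return self.temporal(frame_repr)               # attention across frames

out = FactorisedEncoder()(torch.randn(1, 8, 196, 768))
print(out.shape)  # torch.Size([1, 8, 768])
```

Because spatial attention runs over N patches per frame and temporal attention over T frames, the cost scales roughly with N² + T² rather than (N·T)², which is what makes the factorised variants cheaper on long videos.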
