Are Pre-trained Convolutions Better than Pre-trained Transformers?

In the era of pre-trained language models, Transformers are the de facto choice of model architectures. While recent research has shown promise in entirely convolutional, or CNN, architectures, they have not been explored using the pre-train-fine-tune paradigm. In the context of language models, are convolutional models competitive with Transformers when pre-trained? This paper investigates this research question and presents several interesting findings. Across an extensive set of experiments on 8 datasets/tasks, we find that CNN-based pre-trained models are competitive and outperform their Transformer counterparts in certain scenarios, albeit with caveats. Overall, the findings outlined in this paper suggest that conflating pre-training and architectural advances is misguided and that both advances should be considered independently. We believe our research paves the way for a healthy amount of optimism in alternative architectures.
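The paper itself compares full pre-trained architectures, but the core building block of the convolutional models it studies is a per-channel convolution over the token sequence. As a rough, illustrative sketch only (the paper uses lightweight/dynamic convolutions; this is a plain depthwise 1D convolution in NumPy, with all names chosen here for illustration):

```python
import numpy as np

def depthwise_conv1d(x, kernels):
    """Depthwise 1D convolution over a token sequence.

    x:       (seq_len, d_model) token representations
    kernels: (d_model, k) one filter per channel
    Returns  (seq_len, d_model), with 'same' zero padding.
    """
    seq_len, d_model = x.shape
    _, k = kernels.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))  # pad only the time axis
    out = np.zeros_like(x)
    for t in range(seq_len):
        window = xp[t:t + k]              # (k, d_model) local context
        out[t] = np.sum(window * kernels.T, axis=0)  # per-channel mix
    return out

# toy example: 5 tokens, 4-dim embeddings, kernel width 3
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 4))
kernels = rng.standard_normal((4, 3))
y = depthwise_conv1d(x, kernels)
print(y.shape)
```

Unlike self-attention, each output position here only mixes a fixed local window of `k` neighbours, which is what makes the architectural comparison in the paper interesting once both models receive the same pre-training.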

https://arxiv.org/abs/2105.03322

Transformers are already the default choice for natural language processing tasks. However, convolutional networks remain under-studied as pre-trained models. Given sufficient pre-training, can convolutional neural networks match the performance of Transformers? This paper compares CNNs and Transformers across eight datasets and tasks, and finds that in certain scenarios CNN-based pre-trained models can outperform Transformer-based pre-trained models.
