Perspectives and Prospects on Transformer Architecture for Cross-Modal Tasks with Language and Vision

Transformer architectures have brought about fundamental changes to the field of computational linguistics, which had been dominated by recurrent neural networks for many years. Their success also implies drastic changes for cross-modal tasks involving language and vision, and many researchers have already tackled the issue. In this paper, we review some of the most critical milestones in the field, as well as overall trends in how the transformer architecture has been incorporated into visuolinguistic cross-modal tasks. Furthermore, we discuss its current limitations and speculate on some of the prospects that we find imminent.

https://arxiv.org/abs/2103.04037

