Category Archives: Daily Paper Review

Do Transformers Really Perform Bad for Graph Representation?

The Transformer architecture has become a dominant choice in many domains, such as natural language processing and computer vision. Yet, it has not achieved competitive performance on popular leaderboards of graph-level prediction compared to mainstream GNN variants. Therefore, it remains a mystery how Transformers could perform well for graph representation learning. In this paper, we solve this mystery by presenting Graphormer, which is built upon the standard Transformer architecture, and could attain excellent results on a broad range of graph representation learning tasks, especially on the recent OGB Large-Scale Challenge. Our key insight to utilizing Transformer in the graph is the necessity of effectively encoding the structural information of a graph into the model. To this end, we propose several simple yet effective structural encoding methods to help Graphormer better model graph-structured data. Besides, we mathematically characterize the expressive power of Graphormer and exhibit that with our ways of encoding the structural information of graphs, many popular GNN variants could be covered as the special cases of Graphormer.

https://arxiv.org/abs/2106.05234

The Transformer architecture has become one of the dominant models for tasks such as natural language processing and computer vision. However, it has yet to show competitive performance on graph-level prediction leaderboards, so how to apply Transformers to graph representation learning remains open to exploration. In this paper, we present Graphormer, an architecture built on the standard Transformer that achieves excellent performance on graph representation learning tasks. The key is how to effectively encode the structural information of a graph into the architecture. To this end, we propose several simple yet effective structural encodings that help Graphormer better model graph-structured data. In addition, we mathematically characterize the expressive power of Graphormer and of our encodings, showing that many popular GNN variants can be expressed as special cases of Graphormer.
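To make the structural-encoding idea concrete, here is a minimal PyTorch sketch of a single attention head with a learnable shortest-path-distance bias added to the attention logits. Names such as `GraphormerAttention` and `spatial_bias` are our own illustrative choices, not the official implementation; the full model also adds centrality and edge encodings.

```python
import torch
import torch.nn as nn

class GraphormerAttention(nn.Module):
    """Single-head self-attention with a Graphormer-style spatial bias.

    A minimal sketch: one learnable scalar per shortest-path-distance bucket
    is added to the attention logits before the softmax.
    """
    def __init__(self, dim, max_dist=16):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.scale = dim ** -0.5
        # learnable bias indexed by shortest-path distance
        self.spatial_bias = nn.Embedding(max_dist + 1, 1)

    def forward(self, x, spd):
        # x:   (batch, nodes, dim)   node features (degree encoding added upstream)
        # spd: (batch, nodes, nodes) integer shortest-path distances
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = (q @ k.transpose(-2, -1)) * self.scale
        logits = logits + self.spatial_bias(spd).squeeze(-1)  # structural bias
        return torch.softmax(logits, dim=-1) @ v

x = torch.randn(2, 5, 32)               # 2 toy graphs, 5 nodes, 32-dim features
spd = torch.randint(0, 4, (2, 5, 5))    # toy shortest-path distance matrix
out = GraphormerAttention(32)(x, spd)   # (2, 5, 32)
```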

Not All Images are Worth 16×16 Words: Dynamic Vision Transformers with Adaptive Sequence Length

Vision Transformers (ViT) have achieved remarkable success in large-scale image recognition. They split every 2D image into a fixed number of patches, each of which is treated as a token. Generally, representing an image with more tokens would lead to higher prediction accuracy, while it also results in drastically increased computational cost. To achieve a decent trade-off between accuracy and speed, the number of tokens is empirically set to 16×16. In this paper, we argue that every image has its own characteristics, and ideally the token number should be conditioned on each individual input. In fact, we have observed that there exist a considerable number of “easy” images which can be accurately predicted with a mere number of 4×4 tokens, while only a small fraction of “hard” ones need a finer representation. Inspired by this phenomenon, we propose a Dynamic Transformer to automatically configure a proper number of tokens for each input image. This is achieved by cascading multiple Transformers with increasing numbers of tokens, which are sequentially activated in an adaptive fashion at test time, i.e., the inference is terminated once a sufficiently confident prediction is produced. We further design efficient feature reuse and relationship reuse mechanisms across different components of the Dynamic Transformer to reduce redundant computations. Extensive empirical results on ImageNet, CIFAR-10, and CIFAR-100 demonstrate that our method significantly outperforms the competitive baselines in terms of both theoretical computational efficiency and practical inference speed.

https://arxiv.org/abs/2105.15075

ViT has risen to prominence in large-scale image recognition. It splits a 2D image into a fixed number of patches, each treated as a token. In general, using more tokens improves accuracy but sharply increases the computational cost, so the token number is typically set to 16×16 as a trade-off between efficiency and performance. In this paper, we argue that every image has its own characteristics, so each image should be represented with a different number of tokens. In fact, we find that most images can be adequately represented with only 4×4 tokens, while only a small fraction of hard images need a finer representation. Motivated by this observation, we propose a Dynamic Transformer that automatically decides a proper number of tokens for each image. The architecture cascades multiple Transformers with increasing token numbers, activated sequentially at test time; inference stops as soon as a sufficiently confident prediction is produced. We also design efficient feature and relationship reuse mechanisms to cut redundant computation. Experiments show that our model achieves strong efficiency and classification performance on several public datasets.
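The adaptive early-exit loop at the heart of the method can be sketched in a few lines. This is an illustrative simplification, not the official DVT code: `models` stands in for the cascade of Transformers with increasing token counts, and `threshold` is a hypothetical confidence cutoff.

```python
import torch
import torch.nn as nn

def adaptive_infer(models, image, threshold=0.9):
    """Early-exit inference over a coarse-to-fine cascade of classifiers.

    Stops at the first stage whose softmax confidence clears `threshold`,
    so "easy" images never pay for the fine-grained models.
    """
    probs = None
    for model in models:
        probs = torch.softmax(model(image), dim=-1)
        if probs.max().item() >= threshold:  # confident enough: exit early
            break
    return probs

# toy stand-ins for the coarse-to-fine Transformers
models = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)) for _ in range(3)]
image = torch.randn(1, 3, 32, 32)
print(adaptive_infer(models, image).argmax(dim=-1))
```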

High-Resolution Photorealistic Image Translation in Real-Time: A Laplacian Pyramid Translation Network

Existing image-to-image translation (I2IT) methods are either constrained to low-resolution images or long inference time due to their heavy computational burden on the convolution of high-resolution feature maps. In this paper, we focus on speeding-up the high-resolution photorealistic I2IT tasks based on closed-form Laplacian pyramid decomposition and reconstruction. Specifically, we reveal that the attribute transformations, such as illumination and color manipulation, relate more to the low-frequency component, while the content details can be adaptively refined on high-frequency components. We consequently propose a Laplacian Pyramid Translation Network (LPTN) to simultaneously perform these two tasks, where we design a lightweight network for translating the low-frequency component with reduced resolution and a progressive masking strategy to efficiently refine the high-frequency ones. Our model avoids most of the heavy computation consumed by processing high-resolution feature maps and faithfully preserves the image details. Extensive experimental results on various tasks demonstrate that the proposed method can translate 4K images in real-time using one normal GPU while achieving comparable transformation performance against existing methods. 

https://arxiv.org/abs/2105.09188

Existing I2IT methods are either constrained to low-resolution images or suffer from long inference times. In this paper, we tackle high-resolution photorealistic I2IT through a closed-form Laplacian pyramid decomposition and reconstruction. We find that illumination and color changes relate mostly to an image's low-frequency component, while content details live in the high-frequency components. We therefore propose a Laplacian Pyramid Translation Network (LPTN): a lightweight network translates the low-frequency component at reduced resolution, and a progressive masking strategy refines the high-frequency ones. The model avoids most of the heavy computation while preserving as much image detail as possible; in experiments it performs real-time style transfer on 4K images.
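The closed-form decomposition the method builds on is easy to reproduce. Below is a minimal PyTorch sketch using bilinear down/upsampling in place of the usual Gaussian filtering; LPTN's lightweight translation network would operate on the low-frequency base while a masking branch refines the residuals.

```python
import torch
import torch.nn.functional as F

def laplacian_pyramid(img, levels=3):
    """Closed-form Laplacian decomposition: each level stores a
    high-frequency residual; the last entry is the low-frequency base."""
    pyramid, current = [], img
    for _ in range(levels):
        down = F.interpolate(current, scale_factor=0.5, mode="bilinear",
                             align_corners=False)
        up = F.interpolate(down, size=current.shape[-2:], mode="bilinear",
                           align_corners=False)
        pyramid.append(current - up)  # high-frequency residual
        current = down
    pyramid.append(current)           # low-frequency base
    return pyramid

def reconstruct(pyramid):
    """Exact inverse of the decomposition above."""
    current = pyramid[-1]
    for residual in reversed(pyramid[:-1]):
        current = F.interpolate(current, size=residual.shape[-2:],
                                mode="bilinear", align_corners=False) + residual
    return current

img = torch.randn(1, 3, 256, 256)
pyr = laplacian_pyramid(img)
assert torch.allclose(reconstruct(pyr), img, atol=1e-5)  # lossless round trip
```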

Are Pre-trained Convolutions Better than Pre-trained Transformers?

In the era of pre-trained language models, Transformers are the de facto choice of model architectures. While recent research has shown promise in entirely convolutional, or CNN, architectures, they have not been explored using the pre-train-fine-tune paradigm. In the context of language models, are convolutional models competitive to Transformers when pre-trained? This paper investigates this research question and presents several interesting findings. Across an extensive set of experiments on 8 datasets/tasks, we find that CNN-based pre-trained models are competitive and outperform their Transformer counterpart in certain scenarios, albeit with caveats. Overall, the findings outlined in this paper suggest that conflating pre-training and architectural advances is misguided and that both advances should be considered independently. We believe our research paves the way for a healthy amount of optimism in alternative architectures.

https://arxiv.org/abs/2105.03322

Transformers are the obvious choice for natural language processing tasks, but convolutional architectures remain underexplored as pre-trained models. Given sufficient pre-training, can convolutional networks match Transformer performance? This paper compares CNNs and Transformers across eight datasets and tasks, and finds that in certain scenarios CNN-based pre-trained models can outperform their Transformer-based counterparts.

MLP-Mixer: An all-MLP Architecture for Vision

The strong performance of vision transformers on image classification and other vision tasks is often attributed to the design of their multi-head attention layers. However, the extent to which attention is responsible for this strong performance remains unclear. In this short report, we ask: is the attention layer even necessary? Specifically, we replace the attention layer in a vision transformer with a feed-forward layer applied over the patch dimension. The resulting architecture is simply a series of feed-forward layers applied over the patch and feature dimensions in an alternating fashion. In experiments on ImageNet, this architecture performs surprisingly well: a ViT/DeiT-base-sized model obtains 74.9% top-1 accuracy, compared to 77.9% and 79.9% for ViT and DeiT respectively. These results indicate that aspects of vision transformers other than attention, such as the patch embedding, may be more responsible for their strong performance than previously thought. We hope these results prompt the community to spend more time trying to understand why our current models are as effective as they are.

This paper proposes an attention-layer-free, all-MLP Transformer-style architecture: the attention layer of a vision transformer is replaced by a feed-forward layer applied over the patch dimension, leaving only feed-forward layers alternating over the patch and feature dimensions.
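As a rough sketch of what "feed-forward over the patch dimension" means, the block below alternates an MLP across tokens (via a transpose) with an MLP across channels. This is an illustrative reimplementation of the idea, not the paper's exact model.

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """Alternating feed-forward layers over patch and feature dimensions."""
    def __init__(self, num_patches, dim, hidden=256):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(    # mixes information across patches
            nn.Linear(num_patches, hidden), nn.GELU(), nn.Linear(hidden, num_patches))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(  # mixes information across features
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x):                  # x: (batch, patches, dim)
        # transpose so the first MLP acts on the patch axis, then transpose back
        x = x + self.token_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        return x + self.channel_mlp(self.norm2(x))

tokens = torch.randn(2, 196, 384)          # 14x14 patches, 384-dim embeddings
print(MixerBlock(196, 384)(tokens).shape)  # torch.Size([2, 196, 384])
```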

On Buggy Resizing Libraries and Surprising Subtleties in FID Calculation

We investigate the sensitivity of the Fréchet Inception Distance (FID) score to inconsistent and often incorrect implementations across different image processing libraries. FID score is widely used to evaluate generative models, but each FID implementation uses a different low-level image processing process. Image resizing functions in commonly-used deep learning libraries often introduce aliasing artifacts. We observe that numerous subtle choices need to be made for FID calculation and a lack of consistencies in these choices can lead to vastly different FID scores. In particular, we show that the following choices are significant: (1) selecting what image resizing library to use, (2) choosing what interpolation kernel to use, (3) what encoding to use when representing images. We additionally outline numerous common pitfalls that should be avoided and provide recommendations for computing the FID score accurately. We provide an easy-to-use optimized implementation of our proposed recommendations in the accompanying code.

https://arxiv.org/abs/2104.11222

We find that the FID score is sensitive to inconsistent, and often incorrect, implementations across different image processing libraries. Although FID is widely used to evaluate generative models, each implementation relies on different low-level image processing, and the resizing functions in common deep learning libraries often introduce aliasing artifacts. Many subtle choices must therefore be made consistently when computing FID, in particular: (1) which library to use for image resizing; (2) which interpolation kernel to use; (3) which encoding to use when storing images.
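The point about resizing choices is easy to verify. The toy snippet below, our own illustration rather than the paper's code, resizes the same random image with PIL's filtered bicubic and with torch's bicubic interpolation; the outputs disagree, and whichever route feeds the Inception network shifts the resulting FID.

```python
import numpy as np
from PIL import Image
import torch
import torch.nn.functional as F

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (128, 128, 3), dtype=np.uint8)

# PIL applies a proper anti-aliasing filter when shrinking
pil = np.asarray(Image.fromarray(img).resize((32, 32), Image.BICUBIC),
                 dtype=np.float32)

# torch bicubic interpolation samples without that filtering
t = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).float()
tor = F.interpolate(t, size=(32, 32), mode="bicubic", align_corners=False)
tor = tor.squeeze(0).permute(1, 2, 0).numpy()

print(np.abs(pil - tor).mean())  # clearly nonzero: the two resizes disagree
```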

Cetacean Translation Initiative: a roadmap to deciphering the communication of sperm whales

The past decade has witnessed a groundbreaking rise of machine learning for human language analysis, with current methods capable of automatically accurately recovering various aspects of syntax and semantics – including sentence structure and grounded word meaning – from large data collections. Recent research showed the promise of such tools for analyzing acoustic communication in nonhuman species. We posit that machine learning will be the cornerstone of future collection, processing, and analysis of multimodal streams of data in animal communication studies, including bioacoustic, behavioral, biological, and environmental data. Cetaceans are unique non-human model species as they possess sophisticated acoustic communications, but utilize a very different encoding system that evolved in an aquatic rather than terrestrial medium. Sperm whales, in particular, with their highly-developed neuroanatomical features, cognitive abilities, social structures, and discrete click-based encoding make for an excellent starting point for advanced machine learning tools that can be applied to other animals in the future. This paper details a roadmap toward this goal based on currently existing technology and multidisciplinary scientific community effort. We outline the key elements required for the collection and processing of massive bioacoustic data of sperm whales, detecting their basic communication units and language-like higher-level structures, and validating these models through interactive playback experiments. The technological capabilities developed by such an undertaking are likely to yield cross-applications and advancements in broader communities investigating non-human communication and animal behavioral research.

https://arxiv.org/abs/2104.08614

Recent machine learning methods can accurately recover syntax and semantics, including sentence structure and word meaning, from large-scale datasets, and recent research shows that such techniques can also be used to analyze acoustic communication between animals. We apply machine learning to the communication of sperm whales, which possess highly developed nervous systems, cognitive abilities, and social structures; lessons learned here can inform future work on other species. This paper lays out a detailed roadmap for collecting and processing sperm-whale bioacoustic signals, detecting their basic communication units and language-like higher-level structures, and validating these models through interactive playback experiments.

DatasetGAN: Efficient Labeled Data Factory with Minimal Human Effort

We introduce DatasetGAN: an automatic procedure to generate massive datasets of high-quality semantically segmented images requiring minimal human effort. Current deep networks are extremely data-hungry, benefiting from training on large-scale datasets, which are time consuming to annotate. Our method relies on the power of recent GANs to generate realistic images. We show how the GAN latent code can be decoded to produce a semantic segmentation of the image. Training the decoder only needs a few labeled examples to generalize to the rest of the latent space, resulting in an infinite annotated dataset generator! These generated datasets can then be used for training any computer vision architecture just as real datasets are. As only a few images need to be manually segmented, it becomes possible to annotate images in extreme detail and generate datasets with rich object and part segmentations. To showcase the power of our approach, we generated datasets for 7 image segmentation tasks which include pixel-level labels for 34 human face parts, and 32 car parts. Our approach outperforms all semi-supervised baselines significantly and is on par with fully supervised methods, which in some cases require as much as 100x more annotated data as our method.

https://arxiv.org/abs/2104.06490

In this paper, we introduce DatasetGAN, which can generate large amounts of labeled data for semantic segmentation tasks. The GAN's latent code can be decoded into a segmentation map, and training this decoder requires only a small number of annotated examples.
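A minimal sketch of the decoding idea: upsample intermediate generator feature maps to image resolution and train a small per-pixel classifier on them. The class and tensor shapes below are illustrative stand-ins; the paper itself uses an ensemble of MLP classifiers over StyleGAN features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelLabelDecoder(nn.Module):
    """Tiny per-pixel classifier over a GAN generator's feature maps."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, num_classes))

    def forward(self, feats, out_size):
        # feats: list of (batch, C_i, H_i, W_i) maps from generator layers
        ups = [F.interpolate(f, size=out_size, mode="bilinear",
                             align_corners=False) for f in feats]
        x = torch.cat(ups, dim=1)               # (batch, sum C_i, H, W)
        x = x.permute(0, 2, 3, 1)               # one feature vector per pixel
        return self.mlp(x).permute(0, 3, 1, 2)  # (batch, classes, H, W)

# toy feature maps standing in for two generator layers
feats = [torch.randn(1, 64, 16, 16), torch.randn(1, 32, 32, 32)]
logits = PixelLabelDecoder(feat_dim=96, num_classes=34)(feats, out_size=(64, 64))
print(logits.shape)  # torch.Size([1, 34, 64, 64])
```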

Escaping the Big Data Paradigm with Compact Transformers


With the rise of Transformers as the standard for language processing, and their advancements in computer vision, along with their unprecedented size and amounts of training data, many have come to believe that they are not suitable for small sets of data. This trend leads to great concerns, including but not limited to: limited availability of data in certain scientific domains and the exclusion of those with limited resource from research in the field. In this paper, we dispel the myth that transformers are “data hungry” and therefore can only be applied to large sets of data. We show for the first time that with the right size and tokenization, transformers can perform head-to-head with state-of-the-art CNNs on small datasets. Our model eliminates the requirement for class token and positional embeddings through a novel sequence pooling strategy and the use of convolutions. We show that compared to CNNs, our compact transformers have fewer parameters and MACs, while obtaining similar accuracies. Our method is flexible in terms of model size, and can have as little as 0.28M parameters and achieve reasonable results. It can reach an accuracy of 94.72% when training from scratch on CIFAR-10, which is comparable with modern CNN based approaches, and a significant improvement over previous Transformer based models. Our simple and compact design democratizes transformers by making them accessible to those equipped with basic computing resources and/or dealing with important small datasets.

https://arxiv.org/abs/2104.05704

Conventional ViT models require large amounts of training data. To address this, we propose the CCT architecture, which can be trained on small datasets while matching the performance of CNNs. Through a novel sequence pooling strategy and the use of convolutions, our model removes the dependence on a class token and positional embeddings. Experiments show that it matches state-of-the-art models with fewer parameters and faster inference.
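Sequence pooling, the piece that lets CCT drop the class token, is compact enough to sketch: a linear layer scores each output token, and the tokens are averaged under a softmax of those scores. This is our reading of the mechanism, not the official code.

```python
import torch
import torch.nn as nn

class SeqPool(nn.Module):
    """Attention-weighted average of output tokens, replacing a class token."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.Linear(dim, 1)  # one importance score per token

    def forward(self, x):              # x: (batch, tokens, dim)
        weights = torch.softmax(self.attn(x), dim=1)     # (batch, tokens, 1)
        return (weights.transpose(1, 2) @ x).squeeze(1)  # (batch, dim)

tokens = torch.randn(4, 64, 128)   # encoder output, no class token needed
print(SeqPool(128)(tokens).shape)  # torch.Size([4, 128])
```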

InfinityGAN: Towards Infinite-Resolution Image Synthesis

We present InfinityGAN, a method to generate arbitrary-resolution images. The problem is associated with several key challenges. First, scaling existing models to a high resolution is resource-constrained, both in terms of computation and availability of high-resolution training data. InfinityGAN trains and infers patch-by-patch seamlessly with low computational resources. Second, large images should be locally and globally consistent, avoid repetitive patterns, and look realistic. To address these, InfinityGAN takes global appearance, local structure and texture into account. With this formulation, we can generate images with resolution and level of detail not attainable before. Experimental evaluation supports that InfinityGAN generates images with superior global structure compared to baselines, at the same time featuring parallelizable inference. Finally, we show several applications unlocked by our approach, such as fusing styles spatially, multi-modal outpainting, and image inbetweening at arbitrary input and output resolutions.

https://arxiv.org/abs/2104.03963

Arbitrary-resolution image generation faces several challenges: (1) synthesizing high-resolution images demands heavy computational resources; (2) the different parts of a large image must stay consistent, avoid repetitive patterns, and look realistic. To address these, this paper proposes InfinityGAN, a method that generates images at arbitrary resolution. It jointly accounts for global appearance, local structure, and texture, enabling high-resolution images that previous methods could not generate.
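As a toy illustration of patch-by-patch synthesis under one shared global code, the sketch below generates each patch from a single global latent plus its own normalized coordinates and stitches a grid. This is a heavily simplified stand-in for the actual InfinityGAN generator, which separates structure and texture synthesis.

```python
import torch
import torch.nn as nn

class PatchGenerator(nn.Module):
    """Every patch is produced from the same global latent plus its own
    coordinate code, so all patches share one global appearance."""
    def __init__(self, z_dim=64, patch=32):
        super().__init__()
        self.patch = patch
        self.net = nn.Sequential(
            nn.Linear(z_dim + 2, 256), nn.ReLU(),
            nn.Linear(256, 3 * patch * patch), nn.Tanh())

    def forward(self, z_global, coord):
        out = self.net(torch.cat([z_global, coord], dim=-1))
        return out.view(-1, 3, self.patch, self.patch)

gen = PatchGenerator()
z = torch.randn(1, 64)  # one global appearance code shared by all patches
rows = []
for i in range(4):      # stitch a 4x4 grid of independently generated patches
    row = [gen(z, torch.tensor([[i / 4, j / 4]])) for j in range(4)]
    rows.append(torch.cat(row, dim=-1))
canvas = torch.cat(rows, dim=-2)  # (1, 3, 128, 128)
print(canvas.shape)
```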