Context-aware Feature Generation for Zero-shot Semantic Segmentation

Paper: https://arxiv.org/pdf/2008.06893.pdf

Code: https://github.com/bcmi/CaGNet-Zero-Shot-Semantic-Segmentation

Existing semantic segmentation models heavily rely on dense pixel-wise annotations. To reduce the annotation burden, we focus on a challenging task named zero-shot semantic segmentation, which aims to segment unseen objects with zero annotations. This task can be accomplished by transferring knowledge across categories via semantic word embeddings. In this paper, we propose a novel context-aware feature generation method for zero-shot segmentation named CaGNet. In particular, based on the observation that a pixel-wise feature depends heavily on its contextual information, we insert a contextual module into the segmentation network to capture pixel-wise contextual information, which guides the generation of more diverse and context-aware features from semantic word embeddings. Our method achieves state-of-the-art results on three benchmark datasets for zero-shot segmentation.
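In code terms, the abstract describes two pieces: a contextual module that encodes each pixel's surroundings into a latent code, and a generator that turns a class's semantic word embedding, conditioned on that per-pixel code, into a fake pixel-wise feature. Below is a minimal PyTorch sketch of that reading; the module names, layer widths, dilation pattern, latent dimension, and the Gaussian reparameterisation details are illustrative assumptions, not the exact architecture from the paper or repository.

```python
import torch
import torch.nn as nn

class ContextualModule(nn.Module):
    """Hypothetical contextual module: gathers multi-scale context with
    dilated convolutions and encodes it as a per-pixel latent code."""
    def __init__(self, in_dim=2048, latent_dim=16):
        super().__init__()
        self.context = nn.Sequential(
            nn.Conv2d(in_dim, 256, 3, padding=1, dilation=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        # Per-pixel Gaussian over the contextual latent code (an assumption
        # about the sampling details).
        self.mu = nn.Conv2d(256, latent_dim, 1)
        self.logvar = nn.Conv2d(256, latent_dim, 1)

    def forward(self, feat):
        h = self.context(feat)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise
        return z, mu, logvar

class FeatureGenerator(nn.Module):
    """Hypothetical generator: maps a word embedding plus the contextual
    latent code to a pixel-wise feature; 1x1 convs act as a per-pixel MLP."""
    def __init__(self, embed_dim=300, latent_dim=16, feat_dim=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(embed_dim + latent_dim, 512, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(512, feat_dim, 1),
        )

    def forward(self, word_embed, z):
        # word_embed: (B, embed_dim, H, W), the class embedding broadcast to
        # every pixel; z: (B, latent_dim, H, W), the contextual latent code.
        return self.net(torch.cat([word_embed, z], dim=1))

# Toy usage with random tensors: synthesise features for one class from its
# word embedding, conditioned on context extracted from backbone features.
backbone_feat = torch.randn(2, 2048, 33, 33)   # e.g. a DeepLab feature map
ctx, gen = ContextualModule(), FeatureGenerator()
z, mu, logvar = ctx(backbone_feat)
embed = torch.randn(300)                       # word2vec-style class embedding
embed_map = embed.view(1, 300, 1, 1).expand(2, 300, 33, 33)
fake_feat = gen(embed_map, z)                  # (2, 2048, 33, 33)
```

Because the generator is built entirely from 1x1 convolutions, each pixel's synthesized feature is shaped by its own contextual latent code rather than being a single feature repeated per class, which is what makes the generated features diverse and context-aware.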
