STGAN: A unified selective transfer network for arbitrary image attribute editing

Arbitrary attribute editing can generally be tackled by combining an encoder-decoder with generative adversarial networks. However, the bottleneck layer in the encoder-decoder usually gives rise to blurry, low-quality editing results, and adding skip connections improves image quality at the cost of weakened attribute-manipulation ability. Moreover, existing methods exploit the full target attribute vector to guide the translation to the desired target domain. In this work, we propose addressing these issues from a selective-transfer perspective. Considering that a specific editing task relates only to the attributes to be changed, not all target attributes, our model selectively takes the difference between the target and source attribute vectors as input. Furthermore, selective transfer units are incorporated into the encoder-decoder to adaptively select and modify encoder features for enhanced attribute editing. Experiments show that our method (i.e., STGAN) simultaneously improves attribute-manipulation accuracy and perceptual quality, and performs favorably against state-of-the-art methods in arbitrary face attribute editing and season translation.

https://arxiv.org/pdf/1904.09709.pdf

Arbitrary attribute editing can generally be handled with an encoder-decoder plus a GAN. However, the bottleneck layer of the encoder-decoder degrades the quality of the generated images and makes them blurry. Adding skip connections usually improves image quality, but it also weakens the attribute-editing ability. In addition, existing methods use the target attribute vector to guide a flexible translation toward the target domain. In this paper, the authors address these issues from a selective-transfer perspective. Since an edit usually only changes one attribute of an image, the model takes the difference between the target and source attribute vectors as input. Moreover, selective transfer units can be embedded in the encoder-decoder to adaptively select and edit encoder features for the attribute-editing task.
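The two key ideas can be sketched in a few lines: feed the network the *difference* attribute vector (so unchanged attributes contribute zero), and pass each encoder feature through a GRU-style gate before it reaches the decoder. The sketch below is a simplified, framework-free illustration in NumPy under assumed shapes; `stu_step` and its weight names are illustrative, not the paper's exact architecture (STGAN's actual units operate on convolutional feature maps).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stu_step(enc_feat, hidden, W_r, W_z, W_h):
    """One GRU-style selective-transfer step (simplified sketch).

    enc_feat, hidden: (C,) feature vectors; W_r, W_z, W_h: (C, 2C) weights.
    Shapes and names are illustrative assumptions, not STGAN's exact config.
    """
    x = np.concatenate([enc_feat, hidden])
    r = sigmoid(W_r @ x)                    # reset gate: what to drop from hidden
    z = sigmoid(W_z @ x)                    # update gate: how much to rewrite
    h_tilde = np.tanh(W_h @ np.concatenate([enc_feat, r * hidden]))
    h_new = (1 - z) * hidden + z * h_tilde  # selectively transferred feature
    return h_new

# Difference attribute vector: only the changed attributes are non-zero,
# so the generator is driven only by what must actually change.
src = np.array([1.0, 0.0, 1.0])  # e.g. source attributes [male, blond, young]
tgt = np.array([1.0, 1.0, 1.0])  # desired edit: add "blond"
diff = tgt - src                 # [0., 1., 0.]
```

With skip connections alone, encoder features pass to the decoder unchanged and can override the requested edit; the gate above lets the network keep fine detail while rewriting only the attribute-relevant parts of each feature.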
