GuidedStyle: Attribute Knowledge Guided Style Manipulation for Semantic Face Editing

Although significant progress has been made in synthesizing high-quality and visually realistic face images with unconditional Generative Adversarial Networks (GANs), such models still lack control over the generation process needed for semantic face editing. In addition, it remains very challenging to keep other facial information untouched while editing the target attributes. In this paper, we propose a novel learning framework, called GuidedStyle, to achieve semantic face editing on StyleGAN by guiding the image generation process with a knowledge network. Furthermore, we introduce an attention mechanism into the StyleGAN generator to adaptively select a single layer for style manipulation. As a result, our method is able to perform disentangled and controllable edits along various attributes, including smiling, eyeglasses, gender, mustache and hair color. Both qualitative and quantitative results demonstrate the superiority of our method over other competing methods for semantic face editing. Moreover, we show that our model can also be applied to different types of real and artistic face editing, demonstrating strong generalization ability.

https://arxiv.org/pdf/2012.11856v1.pdf

Although unconditional GANs have made great progress in high-quality image synthesis, the generation process still has shortcomings, for example in semantic face editing tasks. In addition, preserving the non-edited information while editing a face image remains a challenge. In this paper, we propose a new framework for semantic face editing, GuidedStyle, which builds on StyleGAN and guides the image generation process with a knowledge network. Moreover, we use an attention mechanism so that the StyleGAN generator can adaptively select a single layer for style manipulation. Results show that our method can edit faces along various attributes, including smiling, eyeglasses, gender, mustache and hair color.
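The core idea described above — attention weights over generator layers, with the edit applied at a single selected layer — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the layer count, style dimensionality, the learned attribute direction, and the attention logits are all assumed placeholders.

```python
import numpy as np

# Hypothetical sketch of attention-guided single-layer style editing.
# Assumptions (not from the paper's code): a 14-layer style space of
# 512-d codes, a precomputed attribute direction, and per-layer
# attention logits produced by some attribute-specific module.

NUM_LAYERS = 14   # assumed number of StyleGAN style layers
STYLE_DIM = 512   # assumed style code dimensionality

def edit_styles(styles, direction, attn_logits, strength=1.0):
    """Apply an attribute direction only at the layer with the
    highest attention weight (hard selection at edit time)."""
    weights = np.exp(attn_logits - attn_logits.max())
    weights /= weights.sum()                 # softmax over layers
    layer = int(np.argmax(weights))          # select a single layer
    edited = styles.copy()
    edited[layer] += strength * direction    # move along the direction
    return edited, layer

rng = np.random.default_rng(0)
styles = rng.standard_normal((NUM_LAYERS, STYLE_DIM))
direction = rng.standard_normal(STYLE_DIM)   # placeholder attribute direction
attn_logits = rng.standard_normal(NUM_LAYERS)

edited, layer = edit_styles(styles, direction, attn_logits)
changed = [i for i in range(NUM_LAYERS) if not np.allclose(edited[i], styles[i])]
print(layer, changed)  # only the selected layer's style code changes
```

Restricting the edit to one layer is what keeps the manipulation disentangled: style codes at every other layer are left untouched, so unrelated facial attributes are preserved.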
