ICIP 2021 Paper Detail

Paper ID: ARS-5.11
Paper Title: GI-AEE: GAN INVERSION BASED ATTENTIVE EXPRESSION EMBEDDING NETWORK FOR FACIAL EXPRESSION EDITING
Authors: Yun Zhang, Ruixin Liu, Yifan Pan, Dehao Wu, Yuesheng Zhu, Zhiqiang Bai, Peking University, China
Session: ARS-5: Image and Video Synthesis, Rendering and Visualization
Location: Area I
Session Time: Tuesday, 21 September, 08:00 - 09:30
Presentation Time: Tuesday, 21 September, 08:00 - 09:30
Presentation: Poster
Topic: Image and Video Analysis, Synthesis, and Retrieval: Image & Video Synthesis, Rendering, and Visualization
Abstract: Facial expression editing aims to modify facial expressions according to specified conditions. Existing methods adopt an encoder-decoder architecture that synthesizes the desired expression under the guidance of an expression condition. However, these methods tend to produce artifacts and blur in expression-intensive regions, because they must simultaneously modify the regions where the expression changes and keep the other attributes consistent with the source image. To address these issues, we propose a GAN inversion based Attentive Expression Embedding Network (GI-AEE) for facial expression editing, which decouples the task by using GAN inversion to alleviate the strong influence of the source image on the target image, producing high-quality expression editing results. Furthermore, unlike existing methods that embed the expression condition directly into the network, we propose an Attentive Expression Embedding module that embeds the corresponding expression vectors into different facial regions, producing more plausible results. Qualitative and quantitative experiments demonstrate that our method outperforms state-of-the-art expression editing methods.
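As a reading aid, the Attentive Expression Embedding idea described in the abstract can be pictured as giving each dimension of the expression condition vector its own spatial attention map, so the condition modulates only the facial regions it corresponds to. The PyTorch sketch below is a minimal illustration of that idea, not the authors' implementation; the module name, shapes, the one-attention-map-per-dimension design, and the residual injection are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class AttentiveExpressionEmbedding(nn.Module):
    """Illustrative sketch (not the paper's code): inject an expression
    condition vector into facial feature maps via learned spatial
    attention, one attention map per expression dimension."""

    def __init__(self, feat_channels: int, expr_dim: int):
        super().__init__()
        # Predict one sigmoid attention map per expression dimension
        # from the incoming features (assumed design).
        self.attn = nn.Sequential(
            nn.Conv2d(feat_channels, expr_dim, kernel_size=1),
            nn.Sigmoid(),
        )
        # Project the expression vector into the feature space.
        self.expr_proj = nn.Linear(expr_dim, feat_channels)

    def forward(self, feat: torch.Tensor, expr: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) facial features; expr: (B, E) condition vector.
        attn = self.attn(feat)                                        # (B, E, H, W)
        # Weight each attention map by its expression coefficient and
        # collapse them into a single spatial modulation mask.
        mask = (attn * expr[:, :, None, None]).sum(1, keepdim=True)   # (B, 1, H, W)
        expr_feat = self.expr_proj(expr)[:, :, None, None]            # (B, C, 1, 1)
        # Residually inject the expression only where the mask is active.
        return feat + mask * expr_feat

# Example usage (all shapes hypothetical; 17 might correspond to, e.g.,
# action-unit intensities, but the paper's actual conditioning may differ):
aee = AttentiveExpressionEmbedding(feat_channels=256, expr_dim=17)
feat = torch.randn(2, 256, 32, 32)   # generator feature maps
expr = torch.randn(2, 17)            # target expression condition
out = aee(feat, expr)                # -> (2, 256, 32, 32)
```

Under this reading, regions untouched by the attention masks pass through unchanged, which matches the abstract's stated goal of editing expression-changed regions while keeping other attributes consistent with the source image.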