
ICIP 2021

Paper Detail

Paper ID: MLR-APPL-IVASR-6.8
Paper Title: HSEGAN: HAIR SYNTHESIS AND EDITING USING STRUCTURE-ADAPTIVE NORMALIZATION ON GENERATIVE ADVERSARIAL NETWORK
Authors: Wanling Fan, Jiayuan Fan, Fudan University, China; Gang Yu, Bin Fu, Tencent, China; Tao Chen, Fudan University, China
Session: MLR-APPL-IVASR-6: Machine learning for image and video analysis, synthesis, and retrieval 6
Location: Area D
Session Time: Wednesday, 22 September, 08:00 - 09:30
Presentation Time: Wednesday, 22 September, 08:00 - 09:30
Presentation: Poster
Topic: Applications of Machine Learning: Machine learning for image & video analysis, synthesis, and retrieval
Abstract: Human hair is a special material with complex and varied high-frequency details, and synthesizing and editing realistic, fine-grained hair with deep learning methods is a challenging task. In this paper, we propose HSEGAN, a novel framework consisting of two condition modules that encode the foreground hair and the background respectively, followed by a hair synthesis generator that synthesizes the final result from the encoded input. For efficient and effective hair generation, we propose hair structure-adaptive normalization (HSAN) and use several HSAN residual blocks to build the hair synthesis generator. HSEGAN allows explicit manipulation of hair at three levels: color, structure, and shape. Extensive experiments on the FFHQ dataset demonstrate that our method generates higher-quality hair images than state-of-the-art methods while consuming less time in the inference stage.
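The abstract does not give HSAN's exact formulation, but structure-adaptive normalization layers of this kind typically follow the spatially-adaptive denormalization (SPADE) pattern: normalize the feature map without learned affine parameters, then rescale and shift each pixel using modulation parameters predicted from a guidance map. A minimal NumPy sketch under that assumption (all names and shapes here are hypothetical, not taken from the paper):

```python
import numpy as np

def structure_adaptive_norm(x, structure_map, w_gamma, w_beta, eps=1e-5):
    """SPADE-style spatially-adaptive normalization (illustrative only).

    x             : feature map, shape (C, H, W)
    structure_map : hair-structure guidance, shape (S, H, W)
    w_gamma, w_beta : (C, S) weights of hypothetical 1x1 convolutions
                      that predict per-pixel scale and shift.
    """
    # Parameter-free instance normalization: per-channel stats over H, W.
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True)
    x_norm = (x - mu) / (sigma + eps)

    # Per-pixel modulation predicted from the structure map
    # (a 1x1 convolution is a channel-wise linear map at each pixel).
    gamma = np.einsum('cs,shw->chw', w_gamma, structure_map)
    beta = np.einsum('cs,shw->chw', w_beta, structure_map)

    # Spatially varying denormalization: structure controls scale/shift.
    return (1 + gamma) * x_norm + beta
```

In an HSAN-style residual block, a layer like this would replace ordinary batch normalization so that the structure map steers the generator at every spatial location rather than through a single global code.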