Paper Detail

Paper ID: MLR-APPL-IP-6.6
Paper Title: LET THEM CHOOSE WHAT THEY WANT: A MULTI-TASK CNN ARCHITECTURE LEVERAGING MID-LEVEL DEEP REPRESENTATIONS FOR FACE ATTRIBUTE CLASSIFICATION
Authors: Zhenduo Chen, Feng Liu, Nanjing University of Posts and Telecommunications, China; Zhenglai Zhao, Jiangsu Bzisland Intelligent Technology Institute, China
Session: MLR-APPL-IP-6: Machine learning for image processing 6
Location: Area E
Session Time: Tuesday, 21 September, 15:30 - 17:00
Presentation Time: Tuesday, 21 September, 15:30 - 17:00
Presentation: Poster
Topic: Applications of Machine Learning: Machine learning for image processing
Abstract: Face Attributes Classification (FAC) is an important task in computer vision, aiming to predict the facial attributes of a given image. However, deep learning-based FAC methods typically ignore the value of mid-level feature information and the correlations between face attributes. To address these problems, we propose a novel and effective Multi-task CNN architecture. Instead of predicting all 40 attributes together, an attribute grouping strategy divides the 40 attributes into 8 correlated task groups. Meanwhile, through the Fusion Layer, mid-level deep representations are fused into the original feature representations to jointly predict the face attributes. Furthermore, Task-unique Attention Modules help learn more task-specific feature representations, yielding higher FAC accuracy. Extensive experiments on the CelebA dataset demonstrate that our method outperforms state-of-the-art FAC methods.
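
The sketch below is not the authors' code; it is a minimal, hypothetical PyTorch illustration of the ideas named in the abstract: a shared backbone tapped at a mid-level stage, a Fusion Layer that concatenates mid-level and high-level descriptors, and per-group attention plus classification heads for 8 attribute groups. The backbone depth, channel sizes, the even 5-attributes-per-group split, and all module names are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Hypothetical split of the 40 CelebA attributes into 8 task groups (paper's actual grouping unknown).
ATTRIBUTE_GROUPS = [5, 5, 5, 5, 5, 5, 5, 5]

class TaskAttention(nn.Module):
    """Simple channel attention standing in for the paper's Task-unique Attention Module (assumed form)."""
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x).unsqueeze(-1).unsqueeze(-1)
        return x * w  # reweight feature channels per task group

class MultiTaskFAC(nn.Module):
    """Illustrative multi-task FAC network with mid-level feature fusion (not the published architecture)."""
    def __init__(self):
        super().__init__()
        # Shared backbone split into two stages so a mid-level feature map can be tapped.
        self.stage1 = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.stage2 = nn.Sequential(
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        # "Fusion Layer": concatenate pooled mid-level and task-specific high-level descriptors.
        fused_dim = 128 + 256
        self.attn = nn.ModuleList(TaskAttention(256) for _ in ATTRIBUTE_GROUPS)
        self.heads = nn.ModuleList(nn.Linear(fused_dim, n) for n in ATTRIBUTE_GROUPS)

    def forward(self, x):
        mid = self.stage1(x)                  # mid-level feature map
        high = self.stage2(mid)               # high-level feature map
        mid_vec = self.pool(mid).flatten(1)   # pooled mid-level descriptor
        outputs = []
        for attn, head in zip(self.attn, self.heads):
            task_feat = self.pool(attn(high)).flatten(1)    # task-specific high-level descriptor
            fused = torch.cat([task_feat, mid_vec], dim=1)  # fuse mid- and high-level features
            outputs.append(head(fused))
        return torch.cat(outputs, dim=1)       # logits for all 40 attributes

# Usage: logits = MultiTaskFAC()(torch.randn(2, 3, 224, 224))  -> shape (2, 40)
```

Each head sees both its attention-reweighted high-level features and the shared mid-level descriptor, which is one plausible way to realize the fusion and per-group specialization the abstract describes.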