Paper Detail

Paper ID: IFS-1.3
Paper Title: A Heterogeneous Face Recognition via Part Adaptive and Relation Attention Module
Authors: Rushuang Xu, MyeongAh Cho, Sangyoun Lee, Yonsei University, Republic of Korea
Session: IFS-1: Biometrics
Location: Area K
Session Time: Monday, 20 September, 13:30 - 15:00
Presentation Time: Monday, 20 September, 13:30 - 15:00
Presentation: Poster
Topic: Information Forensics and Security: Biometrics
Abstract: In face recognition applications, we need to process facial images captured under various conditions, for example at night by near-infrared (NIR) surveillance cameras. The illumination difference between NIR and visible-light (VIS) images causes a domain gap, and variations in pose and emotion make facial matching even more difficult. Because heterogeneous face recognition (HFR) suffers from this domain discrepancy, many studies have focused on extracting domain-invariant features, such as facial part relational information. However, when the pose varies, the positions of the facial components change and a different part relation is extracted. In this paper, we propose a part relation attention module that crops facial parts obtained through a semantic mask and performs relational modeling on each of these representative features. Furthermore, we introduce a component adaptive triplet loss that uses adaptive weights for each part to reduce the intra-class distance regardless of domain and pose. Finally, our method improves performance on the CASIA NIR-VIS 2.0 dataset and achieves superior results on the BUAA-VisNir dataset, which contains large pose and emotion variations.
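As an illustration of the component adaptive triplet loss mentioned in the abstract, the sketch below shows one plausible PyTorch formulation in which each facial part contributes a triplet term scaled by a per-part weight. The function name, tensor shapes, margin value, and the softmax weighting scheme are assumptions made for this example, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' code): a component-adaptive
# triplet loss where each facial part's triplet term is scaled by an
# adaptive per-part weight before averaging over the batch.
import torch
import torch.nn.functional as F

def component_adaptive_triplet_loss(anchor, positive, negative,
                                    part_weights, margin=0.3):
    """
    anchor, positive, negative: (batch, num_parts, dim) per-part embeddings.
    part_weights: (num_parts,) non-negative weights, e.g. a softmax over
                  learnable logits (assumed weighting scheme).
    """
    # Per-part Euclidean distances between embeddings.
    d_ap = torch.norm(anchor - positive, dim=-1)   # (batch, num_parts)
    d_an = torch.norm(anchor - negative, dim=-1)   # (batch, num_parts)
    # Standard triplet hinge, computed separately for each facial part.
    per_part = F.relu(d_ap - d_an + margin)        # (batch, num_parts)
    # Weight each part's contribution adaptively, then average over the batch.
    return (per_part * part_weights).sum(dim=-1).mean()

if __name__ == "__main__":
    B, P, D = 8, 5, 256                       # batch, facial parts, embedding dim
    a, p, n = (torch.randn(B, P, D) for _ in range(3))
    w = torch.softmax(torch.randn(P), dim=0)   # hypothetical adaptive part weights
    print(component_adaptive_triplet_loss(a, p, n, w).item())
```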