
ICIP 2021

Paper Detail

Paper ID: MLR-APPL-IVASR-3.5
Paper Title: TWO-PHASE FEATURE FUSION NETWORK FOR VISIBLE-INFRARED PERSON RE-IDENTIFICATION
Authors: Yunzhou Cheng, Guoqiang Xiao, Xiaoqin Tang, Southwest University, China; Wenzhuo Ma, Xinye Gou, Chongqing Productivity Council, China
Session: MLR-APPL-IVASR-3: Machine learning for image and video analysis, synthesis, and retrieval 3
Location: Area E
Session Time: Tuesday, 21 September, 08:00 - 09:30
Presentation Time: Tuesday, 21 September, 08:00 - 09:30
Presentation: Poster
Topic: Applications of Machine Learning: Machine learning for image & video analysis, synthesis, and retrieval
IEEE Xplore: open preview available
Abstract: Visible-infrared person re-identification (VI-ReID) is a challenging problem that aims to match pedestrians captured by visible and infrared cameras. Prevailing methods in this field mainly focus on learning sharable feature representations from the last layer of deep convolutional neural networks (CNNs). However, due to the large intra-modality and cross-modality variations, the last layer's sharable feature representations are less discriminative. To remedy this, we propose a novel Two-Phase Feature Fusion Network (TFFN) that enhances discriminative feature learning via feature fusion. Specifically, TFFN contains two fusion modules: (1) a Multi-Level Fusion Module (MLFM), which re-weights and fuses intra-modality multi-level features to exploit both high- and low-level information; and (2) a Graph-Level Fusion Module (GLFM), which mines and fuses rich graph-level mutual information across the two modalities to reduce modality variations. Additionally, for effective fusion, we develop a deep supervision method to enhance the discrimination of pre-fusion features and suppress noisy information. Extensive experiments show that TFFN outperforms state-of-the-art methods on two mainstream VI-ReID datasets: SYSU-MM01 and RegDB.
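The abstract only describes the fusion modules at a high level. As a rough, hypothetical illustration of the re-weight-and-fuse idea behind a multi-level fusion module (not the authors' actual MLFM implementation — the common feature dimension, the softmax weighting, and all shapes below are assumptions for the sketch):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_level_fusion(features, level_logits):
    """Re-weight per-level feature vectors and fuse them by weighted sum.

    features:     list of (d,) arrays, one per CNN level, assumed already
                  projected to a common dimension d (an assumption here).
    level_logits: (n_levels,) array of importance scores (learnable in a
                  real network; fixed values in this sketch).
    """
    weights = softmax(level_logits)  # normalize level importances to sum to 1
    return sum(w * f for w, f in zip(weights, features))

# Toy usage: three "levels" of 4-dim features.
levels = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
fused = multi_level_fusion(levels, np.zeros(3))
# Equal logits give equal weights, so the fusion is a plain average.
```

With equal importance logits the fused vector is simply the mean of the level features; in a trained network the logits would be learned so that more discriminative levels dominate the fusion.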