Paper Detail

Paper ID: ARS-3.2
Paper Title: PART UNCERTAINTY ESTIMATION CONVOLUTIONAL NEURAL NETWORK FOR PERSON RE-IDENTIFICATION
Authors: Wenyu Sun, Jiyang Xie, Jiayan Qiu, Zhanyu Ma; Beijing University of Posts and Telecommunications, China
Session: ARS-3: Image and Video Biometric Analysis
Location: Area H
Session Time: Monday, 20 September, 13:30 - 15:00
Presentation Time: Monday, 20 September, 13:30 - 15:00
Presentation: Poster
Topic: Image and Video Analysis, Synthesis, and Retrieval: Image & Video Interpretation and Understanding
IEEE Xplore Open Preview: available in IEEE Xplore
Abstract: Due to the large amount of noisy data in the person re-identification (ReID) task, ReID models are usually affected by data uncertainty. Deep uncertainty estimation is therefore important for improving model robustness and matching accuracy. To this end, we propose a part-based uncertainty convolutional neural network (PUCNN), which introduces part-based uncertainty estimation into the baseline model. On the one hand, PUCNN improves model robustness to noisy data by representing the feature embedding as a distribution and constraining the part-based uncertainty. On the other hand, PUCNN improves the cumulative matching characteristics (CMC) performance of the model by filtering out low-quality training samples according to the estimated uncertainty score. Experiments on both image-based datasets (the noised Market-1501 and DukeMTMC) and video-based datasets (PRID2011, iLiDS-VID, and MARS) demonstrate that the proposed method achieves encouraging performance.
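
The sketch below illustrates one plausible reading of the abstract's two ideas, part-based distributional embeddings and an uncertainty score for filtering noisy samples. It is not the authors' implementation; the part count, embedding size, Gaussian parameterization, and the averaging used for the uncertainty score are all assumptions made here for illustration.

```python
# Hypothetical part-based uncertainty head (assumption: Gaussian part embeddings).
# The backbone feature map is split into horizontal part stripes; each part
# predicts a mean embedding and a log-variance, and the per-part variances are
# averaged into a scalar uncertainty score for filtering low-quality samples.
import torch
import torch.nn as nn


class PartUncertaintyHead(nn.Module):
    def __init__(self, in_channels: int = 2048, embed_dim: int = 256, num_parts: int = 6):
        super().__init__()
        self.num_parts = num_parts
        # One mean / log-variance projection per part stripe (assumed design).
        self.mu_heads = nn.ModuleList(
            [nn.Linear(in_channels, embed_dim) for _ in range(num_parts)]
        )
        self.logvar_heads = nn.ModuleList(
            [nn.Linear(in_channels, embed_dim) for _ in range(num_parts)]
        )

    def forward(self, feat_map: torch.Tensor):
        # feat_map: (B, C, H, W) backbone output; H assumed divisible by num_parts.
        b, c, h, w = feat_map.shape
        stripes = feat_map.view(b, c, self.num_parts, h // self.num_parts, w)
        stripes = stripes.mean(dim=(3, 4))  # (B, C, P): average-pool each part stripe

        mus, logvars = [], []
        for p in range(self.num_parts):
            x = stripes[:, :, p]                     # (B, C) pooled part feature
            mus.append(self.mu_heads[p](x))          # part mean embedding
            logvars.append(self.logvar_heads[p](x))  # part log-variance (uncertainty)
        mu = torch.stack(mus, dim=1)       # (B, P, D)
        logvar = torch.stack(logvars, dim=1)

        # Distributional embedding: sample with the reparameterization trick during
        # training so downstream losses are exposed to the estimated uncertainty.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar) if self.training else mu

        # Scalar uncertainty score per image: mean part variance. Higher scores mark
        # noisier samples that a training loop could down-weight or filter out.
        uncertainty_score = logvar.exp().mean(dim=(1, 2))  # (B,)
        return z, uncertainty_score
```

Under these assumptions, a training loop would drop or down-weight samples whose `uncertainty_score` exceeds a chosen threshold before computing the ReID loss, which is one way to realize the filtering of low-quality training samples described in the abstract.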