ICIP 2021 Paper Detail

Paper ID: ARS-7.2
Paper Title: DESCRIBE ME IF YOU CAN! CHARACTERIZED INSTANCE-LEVEL HUMAN PARSING
Authors: Angelique Loesch, Romaric Audigier, Commissariat à l'énergie atomique et aux énergies alternatives, France
Session: ARS-7: Image and Video Interpretation and Understanding 2
Location: Area H
Session Time: Wednesday, 22 September, 08:00 - 09:30
Presentation Time: Wednesday, 22 September, 08:00 - 09:30
Presentation: Poster
Topic: Image and Video Analysis, Synthesis, and Retrieval: Image & Video Interpretation and Understanding
Abstract: Several computer vision applications, such as person search or online fashion, rely on human description. The use of instance-level human parsing (HP) is therefore relevant, since it localizes semantic attributes and body parts within a person. But how should these attributes be characterized? To our knowledge, only a few single-HP datasets describe attributes with color, size, and/or pattern characteristics. There is a lack of in-the-wild multi-HP datasets with such characteristics. In this article, we propose the CCIHP dataset, based on the multi-HP dataset CIHP, with 20 new labels covering these three kinds of characteristics. In addition, we propose HPTR, a new bottom-up multi-task method based on transformers, as a fast and scalable baseline. It is the fastest state-of-the-art multi-HP method while achieving precision comparable to that of the most precise bottom-up method. We hope this will encourage research on fast and accurate methods for precise human description.