ICIP 2021 Paper Detail

Paper ID: MLR-APPL-IVASR-3.8
Paper Title: WEAKLY-SUPERVISED MULTIPLE OBJECT TRACKING VIA A MASKED CENTER POINT WARPING LOSS
Authors: Sungjoon Yoon, Kyujin Shim, Korea Advanced Institute of Science and Technology, Republic of Korea; Kayoung Park, Agency for Defense Development, Republic of Korea; Changick Kim, Korea Advanced Institute of Science and Technology, Republic of Korea
Session: MLR-APPL-IVASR-3: Machine learning for image and video analysis, synthesis, and retrieval 3
Location: Area E
Session Time: Tuesday, 21 September, 08:00 - 09:30
Presentation Time: Tuesday, 21 September, 08:00 - 09:30
Presentation: Poster
Topic: Applications of Machine Learning: Machine learning for image & video analysis, synthesis, and retrieval
Abstract: Multiple object tracking (MOT), a popular subject in computer vision with broad application areas, aims to detect and track multiple objects across an input video. However, recent learning-based MOT methods require strong supervision on both the bounding box and the ID of each object for every frame used during training, which raises the cost of obtaining labeled data. In this paper, we propose a weakly-supervised MOT framework that enables the accurate tracking of multiple objects while being trained without object ID ground truth labels. Our model is trained only with bounding box information, using a novel masked warping loss that drives the network to indirectly learn how to track objects through a video. Specifically, valid object center points in the current frame are warped with the predicted offset vector and enforced to be equal to the valid object center points in the previous frame. With this approach, we achieve MOT accuracy on the MOT17 dataset on par with that of state-of-the-art fully supervised MOT models, which use both bounding boxes and object IDs as ground truth labels.
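
To illustrate the loss described in the abstract, the following is a minimal, hypothetical PyTorch sketch. The function name, tensor shapes, the nearest-neighbour pairing of warped and previous-frame centers, and the exact meaning of the validity mask are assumptions made for illustration only; the paper should be consulted for the authors' precise formulation.

```python
import torch

def masked_center_warping_loss(curr_centers, prev_centers, pred_offsets, valid_mask):
    """Hypothetical sketch of a masked center-point warping loss.

    curr_centers : (N, 2) object center points in the current frame
    prev_centers : (M, 2) object center points in the previous frame
    pred_offsets : (N, 2) offset vectors predicted by the network
    valid_mask   : (N,)  boolean mask selecting "valid" centers
                   (e.g. objects assumed visible in both frames)
    """
    # Warp current-frame centers toward the previous frame with the predicted offsets.
    warped = curr_centers + pred_offsets        # (N, 2)

    # Keep only the centers marked as valid by the mask.
    warped = warped[valid_mask]                 # (V, 2)

    # Without ID labels there is no given correspondence, so this sketch
    # simply pairs each warped center with its nearest previous-frame center.
    dists = torch.cdist(warped, prev_centers)   # (V, M) pairwise distances
    nearest = dists.min(dim=1).values           # (V,) distance to nearest match

    # Penalize the distance between warped centers and their matched previous centers.
    return nearest.mean()
```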