ICIP 2021
Paper Detail

Paper ID: ARS-1.12
Paper Title: Robust Unsupervised Multi-Object Tracking in Noisy Environments
Authors: Huck Yang, Georgia Institute of Technology, United States; Mohit Chhabra, Hitachi, Japan; Yi-Chieh Liu, Georgia Institute of Technology, United States; Quan Kong, Tomoaki Yoshinaga, Tomokazu Murakami, Hitachi, Japan
Session: ARS-1: Object Detection
Location: Area I
Session Time: Tuesday, 21 September, 15:30 - 17:00
Presentation Time: Tuesday, 21 September, 15:30 - 17:00
Presentation: Poster
Topic: Image and Video Analysis, Synthesis, and Retrieval: Image & Video Mid-Level Analysis
IEEE Xplore Open Preview: available in IEEE Xplore
Abstract: Physical processes, camera movement, and unpredictable environmental conditions such as the presence of dust can induce noise and artifacts in video feeds. We observe that popular unsupervised MOT methods depend on noise-free inputs, and we show that adding a small amount of artificial random noise causes a sharp degradation in model performance on benchmark metrics. We address this problem by introducing a robust unsupervised multi-object tracking (MOT) model: AttU-Net. The proposed single-head attention model helps limit the negative impact of noise by learning visual representations at different segment scales. AttU-Net shows better unsupervised MOT performance than variational inference-based state-of-the-art baselines. We evaluate our method on the MNIST-MOT and Atari game video benchmarks. We also provide two extended video datasets, "Kuzushiji-MNIST MOT", which consists of moving Japanese characters, and "Fashion-MNIST MOT", to validate the effectiveness of MOT models.
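The abstract's robustness test injects a small amount of artificial random noise into otherwise clean video frames. The paper does not specify the noise model here; the sketch below assumes additive Gaussian noise on frames normalized to [0, 1], which is one common choice for this kind of perturbation study. The function name and `sigma` value are illustrative, not taken from the paper.

```python
import numpy as np

def add_gaussian_noise(frames, sigma=0.05, seed=0):
    """Corrupt a batch of video frames (values in [0, 1]) with
    additive Gaussian noise, clipping back to the valid range.
    The noise model and sigma are assumptions for illustration."""
    rng = np.random.default_rng(seed)
    noisy = frames + rng.normal(0.0, sigma, size=frames.shape)
    return np.clip(noisy, 0.0, 1.0)

# Example: a tiny batch of 4 grayscale 64x64 "frames"
frames = np.zeros((4, 64, 64))
noisy = add_gaussian_noise(frames, sigma=0.05)
print(noisy.shape)  # (4, 64, 64)
```

A tracker would then be evaluated on `noisy` rather than `frames` to measure the degradation the authors report.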
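The "single-head attention" in AttU-Net is not detailed on this page. Assuming it resembles the additive attention gate popularized by Attention U-Net (a guess about the design, not a statement of the paper's method), a gate combines skip-connection features `x` with coarser gating features `g` to produce a [0, 1] mask that reweights `x`. A minimal NumPy sketch over flattened feature vectors, with all weight names hypothetical:

```python
import numpy as np

def attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate (assumed design): project skip features x
    and gating features g, combine, and derive a sigmoid mask that
    suppresses (e.g. noisy) regions of x before it is reused."""
    q = np.maximum(x @ W_x + g @ W_g, 0.0)      # ReLU of the joint projection
    alpha = 1.0 / (1.0 + np.exp(-(q @ psi)))    # sigmoid attention coefficients in (0, 1)
    return x * alpha                             # reweighted skip features

# Illustrative shapes: N samples, C channels, F hidden units
rng = np.random.default_rng(0)
N, C, F = 5, 8, 4
x = rng.normal(size=(N, C))
g = rng.normal(size=(N, C))
W_x, W_g = rng.normal(size=(C, F)), rng.normal(size=(C, F))
psi = rng.normal(size=(F, 1))
out = attention_gate(x, g, W_x, W_g, psi)
print(out.shape)  # (5, 8)
```

Because `alpha` lies strictly in (0, 1), the gate can only attenuate skip features, which is one plausible mechanism for limiting the influence of noisy inputs.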