ICIP 2021 Paper Detail

Paper ID: SS-AVV.5
Paper Title: Light-Weight Mixed Stage Partial Network for Surveillance Object Detection with Background Data Augmentation
Authors: Ping-Yang Chen, Jun-Wei Hsieh, National Yang Ming Chiao Tung University, Taiwan; Munkhjargal Gochoo, United Arab Emirates University, United Arab Emirates; Yong-Sheng Chen, National Yang Ming Chiao Tung University, Taiwan
Session: SS-AVV: Special Session: Autonomous Vehicle Vision
Location: Area A
Session Time: Monday, 20 September, 13:30 - 15:00
Presentation Time: Monday, 20 September, 13:30 - 15:00
Presentation Type: Poster
Topic: Special Sessions: Autonomous Vehicle Vision
IEEE Xplore: Open Preview available
Abstract: State-of-the-art (SoTA) models based on convolutional neural networks have improved object detection accuracy by a large margin, yet they still perform poorly on small objects. Moreover, these models are trained mainly on the COCO dataset, whose backgrounds are more complicated than road environments, which degrades the accuracy of small road object detection. Compared with the COCO dataset, the background of a surveillance video is relatively stable and can be used to enhance the accuracy of road object detection. This paper designs a computationally efficient mixed stage partial (MSP) network to detect road objects. Another novelty of this paper is a mixed background data augmentation method that enhances detection accuracy without adding new labelling effort. During inference, only the input image is used to detect road objects, without any background-subtraction information. Extensive experiments on the KITTI and UA-DETRAC benchmarks show that the proposed method achieves SoTA results for highly accurate and efficient road object detection.
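
The paper itself is not reproduced on this page; the sketch below is only a minimal illustration, in Python with NumPy, of how a mixed-background augmentation along the lines described in the abstract could work. It assumes a static background estimated as the per-pixel temporal median of surveillance frames and alpha blending outside the labelled boxes; the function names, box format, and blending strategy are assumptions for illustration, not the authors' implementation.

    import numpy as np

    def estimate_background(frames: np.ndarray) -> np.ndarray:
        """Estimate a static background as the per-pixel temporal median
        of a stack of surveillance frames with shape (T, H, W, C)."""
        return np.median(frames, axis=0).astype(frames.dtype)

    def mix_background(image: np.ndarray,
                       background: np.ndarray,
                       boxes: list[tuple[int, int, int, int]],
                       alpha: float = 0.5) -> np.ndarray:
        """Blend the estimated background into the regions outside the
        labelled boxes, keeping the annotated objects intact so the
        existing labels stay valid (no new labelling effort)."""
        mixed = (alpha * image + (1.0 - alpha) * background).astype(image.dtype)
        # Restore the original pixels inside each labelled box (x1, y1, x2, y2).
        for x1, y1, x2, y2 in boxes:
            mixed[y1:y2, x1:x2] = image[y1:y2, x1:x2]
        return mixed

In this sketch, a training image and its ground-truth boxes produce an augmented sample whose background is partially replaced by the stable scene background, while inference would still take only the raw input image, consistent with the abstract's statement that no subtraction information is used at test time.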