Paper Detail

Paper ID: MLR-APPL-IP-3.13
Paper Title: BOTTOM-UP SALIENCY MEETS TOP-DOWN SEMANTICS FOR OBJECT DETECTION
Authors: Tomoya Sawada, Teng-Yok Lee, Masahiro Mizuno, Mitsubishi Electric Corporation, Japan
Session: MLR-APPL-IP-3: Machine learning for image processing 3
Location: Area F
Session Time: Tuesday, 21 September, 08:00 - 09:30
Presentation Time: Tuesday, 21 September, 08:00 - 09:30
Presentation: Poster
Topic: Applications of Machine Learning: Machine learning for image processing
Abstract: While convolutional neural networks (CNNs) successfully boost the accuracy of object detection algorithms, it is still challenging to detect far-away objects since they can be tiny in an image. To enhance CNNs' ability for tiny object detection, this paper presents an algorithm called Saliency-Guided Feature Module (SGFM). Based on low-level image features, we compute the saliency over the image to indicate where foreground objects could be, and use SGFM to enhance the CNN's feature maps that focus on similar areas. We also present a new dataset named Camera Monitoring System Driving Dataset (CMS-DD), where images were captured from the view angle of side mirrors on driving vehicles, so far-away objects can look even tinier. Our experiments show that SGFMs can further improve a recent state-of-the-art object detector in practical driving scenes like CMS-DD.
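
The abstract only outlines the idea of modulating CNN feature maps with a bottom-up saliency map, without giving the exact formulation of SGFM. The sketch below is a minimal, hypothetical illustration of that general idea, not the paper's actual module: `lowlevel_saliency` stands in for any low-level saliency detector (here, a simple Sobel gradient magnitude), and `saliency_guided_features` gates a backbone feature map with the resized saliency map through a residual-style scaling. All function names, shapes, and the gating formula are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def lowlevel_saliency(image: torch.Tensor) -> torch.Tensor:
    """Hypothetical low-level saliency: gradient magnitude of the grayscale
    image, normalized to [0, 1]. Placeholder for a bottom-up saliency method.
    image: (B, 3, H, W) with values in [0, 1]."""
    gray = image.mean(dim=1, keepdim=True)                      # (B, 1, H, W)
    # Sobel filters for horizontal / vertical gradients.
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=image.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    mag = torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)
    # Normalize each image's map to [0, 1].
    mag = mag - mag.amin(dim=(2, 3), keepdim=True)
    mag = mag / (mag.amax(dim=(2, 3), keepdim=True) + 1e-8)
    return mag

def saliency_guided_features(feat: torch.Tensor,
                             saliency: torch.Tensor) -> torch.Tensor:
    """Gate a CNN feature map with a saliency map resized to its resolution.
    The residual term (1 + s) emphasizes likely-foreground regions while
    leaving non-salient regions attenuated rather than zeroed out.
    feat: (B, C, h, w); saliency: (B, 1, H, W)."""
    s = F.interpolate(saliency, size=feat.shape[2:],
                      mode="bilinear", align_corners=False)
    return feat * (1.0 + s)

# Usage sketch: plug between a backbone stage and the detection head.
if __name__ == "__main__":
    image = torch.rand(2, 3, 256, 256)
    feat = torch.rand(2, 64, 32, 32)   # stand-in for a backbone feature map
    sal = lowlevel_saliency(image)
    enhanced = saliency_guided_features(feat, sal)
    print(enhanced.shape)              # torch.Size([2, 64, 32, 32])
```

The multiplicative gating here is one common way to inject a spatial prior into feature maps; how the paper actually combines saliency with the detector's features (and which detector it builds on) is only specified in the full text.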