Paper Detail

Paper ID: MLR-APPL-IP-1.1
Paper Title: Multi-Scale Feature Guided Low-Light Image Enhancement
Authors: Lanqing Guo, Renjie Wan, Nanyang Technological University, Singapore; Guan-Ming Su, Dolby Laboratories, United States; Alex C. Kot, Bihan Wen, Nanyang Technological University, Singapore
Session: MLR-APPL-IP-1: Machine learning for image processing 1
Location: Area E
Session Time: Monday, 20 September, 13:30 - 15:00
Presentation Time: Monday, 20 September, 13:30 - 15:00
Presentation: Poster
Topic: Applications of Machine Learning: Machine learning for image processing
Abstract: Low-light image enhancement aims to increase the intensity of image pixels to better match human perception and to improve the performance of subsequent vision tasks. While it is relatively easy to brighten a globally low-light image, the lighting of realistic scenes is usually non-uniform and complex, e.g., some images contain both bright and extremely dark regions, with or without rich features and information. Without proper guidance, existing methods often generate abnormal enhancement results with over-exposure artifacts. To tackle this challenge, we propose a multi-scale feature guided attention mechanism in the deep generator, which can effectively perform spatially-varying light enhancement. The attention map is fused from both the gray map and the extracted feature map of the input image, so as to focus on dark and informative regions. Our baseline is an unsupervised generative adversarial network, which can be trained without any low/normal-light image pairs. Experimental results demonstrate the superiority of our method over state-of-the-art alternatives in both visual quality and the performance of subsequent object detection.
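
The paper itself is not reproduced here, so the following is only a minimal PyTorch-style sketch of the attention fusion the abstract describes: a gray map (taken here as 1 minus luminance, so darker pixels get larger weights) is concatenated with a learned feature map and fused into a spatial attention map. The module name `FeatureGuidedAttention`, the single-scale feature branch, and the concatenate-then-1x1-conv fusion are all illustrative assumptions, not the authors' actual architecture.

import torch
import torch.nn as nn

class FeatureGuidedAttention(nn.Module):
    """Sketch of a gray-map + feature-map guided spatial attention (assumed design)."""

    def __init__(self, feat_channels: int = 32):
        super().__init__()
        # Lightweight feature extractor standing in for the paper's
        # multi-scale feature branch (hypothetical stand-in).
        self.features = nn.Sequential(
            nn.Conv2d(3, feat_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Fuse the 1-channel gray map with the feature map into a
        # 1-channel attention map (assumed fusion scheme).
        self.fuse = nn.Conv2d(feat_channels + 1, 1, kernel_size=1)

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # Gray map = 1 - luminance (BT.601 weights), so dark regions
        # receive more attention.
        lum = 0.299 * rgb[:, 0:1] + 0.587 * rgb[:, 1:2] + 0.114 * rgb[:, 2:3]
        gray = 1.0 - lum
        feat = self.features(rgb)
        attn = torch.sigmoid(self.fuse(torch.cat([gray, feat], dim=1)))
        return attn  # (N, 1, H, W), larger where enhancement should focus

if __name__ == "__main__":
    x = torch.rand(1, 3, 64, 64)          # low-light input in [0, 1]
    attn = FeatureGuidedAttention()(x)
    print(attn.shape)                      # torch.Size([1, 1, 64, 64])

In a generator such an attention map would typically modulate the predicted enhancement (e.g., scaling a residual before adding it to the input), but how the authors apply it inside their network is not specified in this listing.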