
Paper Detail

Paper ID: IMT-1.8
Paper Title: DSRN: AN EFFICIENT DEEP NETWORK FOR IMAGE RELIGHTING
Authors: Sourya Dipta Das, Jadavpur University, India; Nisarg Shah, Indian Institute of Technology Jodhpur, India; Saikat Dutta, Indian Institute of Technology Madras, India; Himanshu Kumar, Indian Institute of Technology Jodhpur, India
Session: IMT-1: Computational Imaging Learning-based Models
Location: Area J
Session Time: Tuesday, 21 September, 08:00 - 09:30
Presentation Time: Tuesday, 21 September, 08:00 - 09:30
Presentation: Poster
Topic: Computational Imaging Methods and Models: Learning-Based Models
Abstract: Custom and natural lighting conditions can be emulated in images of a scene during post-editing, and the capabilities of deep learning frameworks can be leveraged for this purpose. Deep image relighting enables automatic photo enhancement through illumination-specific retouching. Most state-of-the-art relighting methods are run-time intensive and memory inefficient. In this paper, we propose an efficient, real-time framework, the Deep Stacked Relighting Network (DSRN), for image relighting that utilizes features aggregated from the input image at different scales. Our model is very lightweight, with a total size of about 42 MB, and has an average inference time of about 0.0116 s for an image of resolution 1024 x 1024, which is faster than other multi-scale models. The proposed method is quite robust at translating the color temperature of the input image to that of the target image, and it performs moderately well at generating light gradients with respect to the target image. Additionally, we demonstrate that the results improve further when images illuminated from opposite directions are used as input.
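
The abstract describes aggregating features from the input image at different scales. The code below is a minimal sketch of that general idea, not the authors' DSRN architecture: it assumes PyTorch, and the module names, channel counts, and three-scale pyramid are illustrative assumptions rather than details taken from the paper.

```python
# Minimal multi-scale feature-aggregation sketch for image relighting.
# NOT the authors' DSRN; all architectural choices here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleRelightingSketch(nn.Module):
    """Encodes the input at three scales, aggregates the features at full
    resolution, and decodes a relit image. Purely illustrative."""

    def __init__(self, channels: int = 32):
        super().__init__()
        # One lightweight encoder shared across scales (assumption).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Fuse the aggregated multi-scale features and predict the output image.
        self.decoder = nn.Sequential(
            nn.Conv2d(3 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        feats = []
        for scale in (1.0, 0.5, 0.25):  # three-scale pyramid (assumption)
            xs = x if scale == 1.0 else F.interpolate(
                x, scale_factor=scale, mode="bilinear", align_corners=False)
            f = self.encoder(xs)
            # Bring each scale's features back to full resolution and collect.
            feats.append(F.interpolate(
                f, size=(h, w), mode="bilinear", align_corners=False))
        return self.decoder(torch.cat(feats, dim=1))


if __name__ == "__main__":
    model = MultiScaleRelightingSketch().eval()
    with torch.no_grad():
        # 1024 x 1024 is the resolution quoted in the abstract.
        out = model(torch.rand(1, 3, 1024, 1024))
    print(out.shape)  # torch.Size([1, 3, 1024, 1024])
```

In this sketch a single shared encoder is run on downsampled copies of the input and the resulting features are upsampled and concatenated before decoding; the actual DSRN design, stacking strategy, and training details are given in the paper itself.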