ICIP 2021 Paper Detail

Paper ID: TEC-2.1
Paper Title: CU-Net+: Deep Fully Interpretable Network for Multi-modal Image Restoration
Authors: Jingyi Xu, Xin Deng, Mai Xu, Beihang University, China; Pier Luigi Dragotti, Imperial College London, United Kingdom
Session: TEC-2: Restoration and Enhancement 2
Location: Area G
Session Time: Tuesday, 21 September, 15:30 - 17:00
Presentation Time: Tuesday, 21 September, 15:30 - 17:00
Presentation: Poster
Topic: Image and Video Processing: Restoration and enhancement
Abstract: Network interpretability is critical in computer vision tasks, especially those involving multiple modalities. For multi-modal image restoration, a recent method, CU-Net, provides an interpretable network based on a multi-modal convolutional sparse coding model. However, its network architecture does not fully reflect the underlying model. In this paper, we propose to unfold the model into a network using a recurrent scheme, leading to a fully interpretable network, namely CU-Net+. In addition, we relax CU-Net's constraint on the numbers of common and unique features, making it more consistent with real conditions. The effectiveness of the proposed CU-Net+ is evaluated on RGB guided depth image super-resolution and flash guided non-flash image denoising tasks. The numerical results show that CU-Net+ outperforms other interpretable and non-interpretable methods, with improvements over CU-Net of 0.16 in RMSE and 0.66 dB in PSNR on the two tasks, respectively.
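To illustrate the general idea of unfolding a sparse coding model into a network with a recurrent (weight-shared) scheme, below is a minimal PyTorch sketch. It is not the authors' CU-Net+ architecture: all class and parameter names are hypothetical, it handles a single modality, and it omits the common/unique feature decomposition that CU-Net+ performs across two modalities. It only shows how each network layer can correspond to one ISTA-style iteration of a convolutional sparse coding model, with the same parameters reused at every iteration.

import torch
import torch.nn as nn

class RecurrentCSCStage(nn.Module):
    """One ISTA-style update for convolutional sparse coding.
    Hypothetical sketch, not the authors' CU-Net+ stage."""
    def __init__(self, channels, feats, kernel=3):
        super().__init__()
        pad = kernel // 2
        # Analysis/synthesis convolutions play the roles of D^T and D.
        self.analysis = nn.Conv2d(channels, feats, kernel, padding=pad)
        self.synthesis = nn.Conv2d(feats, channels, kernel, padding=pad)
        # Learnable soft-threshold, one value per feature channel.
        self.threshold = nn.Parameter(torch.full((1, feats, 1, 1), 0.01))

    def forward(self, z, x):
        # Gradient step on the data term, then soft-thresholding (sparsity).
        residual = x - self.synthesis(z)
        z = z + self.analysis(residual)
        return torch.sign(z) * torch.clamp(z.abs() - self.threshold, min=0.0)

class RecurrentUnfoldingNet(nn.Module):
    """Unfolds the sparse coding iterations with a single shared stage,
    so every layer corresponds to one iteration of the model."""
    def __init__(self, channels=1, feats=32, num_iters=10):
        super().__init__()
        self.stage = RecurrentCSCStage(channels, feats)  # shared weights
        self.num_iters = num_iters
        self.decode = nn.Conv2d(feats, channels, 3, padding=1)

    def forward(self, x):
        z = torch.zeros(x.size(0), self.stage.analysis.out_channels,
                        x.size(2), x.size(3), device=x.device)
        for _ in range(self.num_iters):
            z = self.stage(z, x)  # same parameters every iteration
        return self.decode(z)

# Usage: restore a batch of single-channel images.
net = RecurrentUnfoldingNet()
out = net(torch.randn(2, 1, 64, 64))

Because the stage is reused across all iterations, the learned parameters retain the meaning of the model's dictionary and threshold, which is what makes this style of network interpretable compared with a generic feed-forward stack.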