ICIP 2021
Paper Detail

Paper ID: TEC-4.3
Paper Title: SINGLE IMAGE SUPER-RESOLUTION VIA GLOBAL-CONTEXT ATTENTION NETWORKS
Authors: Pengcheng Bian, Zhonglong Zheng, Dawei Zhang, Liyuan Chen, Minglu Li, Zhejiang Normal University, China
Session: TEC-4: Super-resolution
Location: Area G
Session Time: Wednesday, 22 September, 14:30 - 16:00
Presentation Time: Wednesday, 22 September, 14:30 - 16:00
Presentation: Poster
Topic: Image and Video Processing: Interpolation, super-resolution, and mosaicing
Abstract: In the last few years, single image super-resolution (SISR) has benefited greatly from the rapid development of deep convolutional neural networks (CNNs), and the introduction of attention mechanisms has further improved SISR performance. However, previous methods use one or more types of attention independently in multiple stages and ignore the correlations between different layers in the network. To address these issues, we propose a novel end-to-end architecture named global-context attention network (GCAN) for SISR, which consists of several residual global-context attention blocks (RGCABs) and an inter-group fusion module (IGFM). Specifically, the proposed RGCAB extracts representative features that capture non-local spatial interdependencies and multiple channel relations. The IGFM then aggregates and fuses hierarchical features from multiple layers discriminatively by considering correlations among layers. Extensive experimental results demonstrate that our method achieves superior results compared with other state-of-the-art methods on publicly available datasets.
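The abstract does not specify the internals of the RGCAB, but the "global-context attention" it describes (weighting all spatial positions to pool a single global descriptor, then broadcasting a transformed version back onto the feature map as a residual) can be illustrated with a minimal sketch of the generic global-context block pattern. Everything here is an assumption for illustration: the function name `global_context_attention` and the weight parameters `w_key`, `w_v1`, `w_v2` are hypothetical stand-ins for learned 1x1 convolutions, not the paper's actual layers.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def global_context_attention(feat, w_key, w_v1, w_v2):
    """Illustrative global-context attention over one feature map.

    feat  : (C, H, W) input feature map.
    w_key : (C,) projection producing one attention logit per spatial position
            (stands in for a learned 1x1 conv with a single output channel).
    w_v1, w_v2 : (C, C) transform of the pooled global context
            (stands in for a learned bottleneck of 1x1 convs).
    """
    C, H, W = feat.shape
    flat = feat.reshape(C, H * W)              # (C, N) flatten spatial dims
    logits = w_key @ flat                      # (N,) one score per position
    attn = softmax(logits)                     # spatial attention, sums to 1
    context = flat @ attn                      # (C,) attention-weighted global pooling
    # transform the pooled context (ReLU between the two linear maps)
    t = w_v2 @ np.maximum(w_v1 @ context, 0)   # (C,)
    # broadcast-add the transformed global context to every position (residual)
    return feat + t[:, None, None]
```

Because the same transformed context vector is added at every spatial location, each output position depends on all input positions, which is the non-local interdependency the abstract refers to; the residual connection matches the "residual" in RGCAB.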