
ICIP 2021 Paper Detail

Paper ID MLR-APPL-IP-3.3
Paper Title MULTI-SCALE GRAPH CONVOLUTIONAL INTERACTION NETWORK FOR SALIENT OBJECT DETECTION
Authors Wenqi Che, Luoyi Sun, Zhifeng Xie, Youdong Ding, Kaili Han, Shanghai University, China
Session MLR-APPL-IP-3: Machine Learning for Image Processing 3
Location Area F
Session Time: Tuesday, 21 September, 08:00 - 09:30
Presentation Time: Tuesday, 21 September, 08:00 - 09:30
Presentation Poster
Topic Applications of Machine Learning: Machine learning for image processing
Abstract Remarkable progress has been achieved in salient object detection based on deep learning. However, most previous works struggle to extract effective information from scale-varying data and to improve boundary quality. In this paper, we propose the multi-scale graph convolutional interaction network (MGCINet), which consists of the feature interaction module (FIM), the feature aggregation module (FAM), and the residual refinement module (RRM). FIMs fuse interactive features from neighboring scales. Based on a two-layer graph convolutional network, FAMs aggregate scale-specific information through graph node interaction. RRMs refine coarse saliency maps with blurred boundaries using U-Net residual blocks. In addition, we propose a multi-scale weighted structural loss that assigns different weights to pixels while focusing on image structure at various scales. Experiments show that our method outperforms state-of-the-art approaches on five benchmark datasets under different evaluation metrics.
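
To make the aggregation idea in the abstract concrete, below is a minimal sketch (not the authors' code) of how scale-specific features could be mixed by a two-layer graph convolution, with each scale treated as a graph node. The fully connected adjacency, hidden size, and input shapes are illustrative assumptions, not details taken from the paper.

```python
# Sketch of two-layer GCN aggregation over multi-scale features (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoLayerGCNAggregation(nn.Module):
    """Aggregate per-scale feature vectors with a two-layer graph convolution."""

    def __init__(self, in_dim: int, hidden_dim: int, num_scales: int):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim, bias=False)
        self.w2 = nn.Linear(hidden_dim, in_dim, bias=False)
        # Assumption: fully connected graph over scales, so every scale
        # exchanges information with every other scale (and itself).
        adj = torch.ones(num_scales, num_scales)
        deg_inv_sqrt = adj.sum(dim=1).rsqrt().diag()
        self.register_buffer("a_norm", deg_inv_sqrt @ adj @ deg_inv_sqrt)

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        # node_feats: (batch, num_scales, in_dim), one node per scale.
        h = F.relu(self.a_norm @ self.w1(node_feats))   # first graph convolution
        return self.a_norm @ self.w2(h)                 # second graph convolution


if __name__ == "__main__":
    # Example: 4 scales, 256-d pooled features per scale.
    fam = TwoLayerGCNAggregation(in_dim=256, hidden_dim=128, num_scales=4)
    x = torch.randn(2, 4, 256)
    print(fam(x).shape)  # torch.Size([2, 4, 256])
```

The aggregated node features would then be mapped back to the corresponding spatial scales before the refinement stage; that mapping and the exact graph construction are described in the paper itself.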