
ICIP 2021


Paper Detail

Paper ID: ARS-2.2
Paper Title: SELF-GUIDED ADVERSARIAL LEARNING FOR DOMAIN ADAPTIVE SEMANTIC SEGMENTATION
Authors: Yu-Ting Pang, Jui Chang, Chiou-Ting Hsu, National Tsing Hua University, Taiwan
Session: ARS-2: Image and Video Segmentation
Location: Area I
Session Time: Monday, 20 September, 15:30 - 17:00
Presentation Time: Monday, 20 September, 15:30 - 17:00
Presentation: Poster
Topic: Image and Video Analysis, Synthesis, and Retrieval: Image & Video Interpretation and Understanding
Abstract: Unsupervised domain adaptation has been introduced to generalize semantic segmentation models from labeled synthetic images to unlabeled real-world images. Although much effort has been devoted to minimizing the cross-domain gap, segmentation results on real-world data remain highly unstable. In this paper, we discuss two main issues that hinder previous methods from achieving satisfactory results and propose a novel self-guided adversarial learning method to strengthen domain adaptation. First, to deal with unpredictable data variation in the real-world domain, we develop a self-guided adversarial learning method that selects reliable target pixels as guidance to lead the adaptation of the remaining pixels. Second, to address the class-imbalance issue, we apply the selection strategy to each class independently and incorporate this idea with class-level adversarial learning in a unified framework. Experimental results show that the proposed method significantly outperforms previous methods on several benchmark datasets.
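The abstract's two key ideas (selecting reliable target pixels as guidance, and doing so independently per class) can be illustrated with a minimal sketch. The code below is a hypothetical interpretation, not the paper's actual criterion: it assumes "reliable" means the most confident fraction of pixels predicted as each class, so that rare classes still contribute guidance. The function name, the `top_frac` parameter, and the quantile-based threshold are all assumptions for illustration.

```python
import numpy as np

def select_reliable_pixels(probs, top_frac=0.5):
    """Hypothetical per-class confident-pixel selection.

    probs: (C, H, W) softmax output of a segmentation network
           on an unlabeled target-domain image.
    Returns a boolean (H, W) mask of "reliable" pixels: for each
    predicted class, the most confident `top_frac` of its pixels.
    Thresholding per class (rather than globally) keeps rare
    classes represented, addressing the class-imbalance issue.
    """
    pred = probs.argmax(axis=0)   # (H, W) hard class predictions
    conf = probs.max(axis=0)      # (H, W) confidence of each prediction
    mask = np.zeros(pred.shape, dtype=bool)
    for c in range(probs.shape[0]):
        cls = pred == c
        if not cls.any():
            continue  # class c not predicted anywhere in this image
        # per-class threshold: keep the top `top_frac` most confident
        # pixels among those predicted as class c
        thr = np.quantile(conf[cls], 1.0 - top_frac)
        mask |= cls & (conf >= thr)
    return mask
```

In an adversarial-training loop, such a mask could weight or split the discriminator loss so that the selected pixels guide the adaptation of the remaining ones; the abstract does not specify the exact coupling, so that part is left out here.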