ICIP 2021

Paper Detail

Paper ID: SMR-3.1
Paper Title: SELF-SUPERVISED DISENTANGLED EMBEDDING FOR ROBUST IMAGE CLASSIFICATION
Authors: Lanqing Liu, Zhenyu Duan, Guozheng Xu, Yi Xu, Shanghai Jiao Tong University, China
Session: SMR-3: Image and Video Representation
Location: Area F
Session Time: Tuesday, 21 September, 15:30 - 17:00
Presentation Time: Tuesday, 21 September, 15:30 - 17:00
Presentation: Poster
Topic: Image and Video Sensing, Modeling, and Representation: Image & video representation
IEEE Xplore Open Preview: available
Abstract: Recently, the vulnerability of deep learning algorithms to adversarial samples has been widely recognized. Most existing defense methods consider only the attack's influence at the image level, while the effect of correlation among feature components has not been investigated. In fact, once one feature component is successfully attacked, its correlated components can be attacked with higher probability. This paper proposes a self-supervised disentanglement-based defense framework: a general tool that disentangles features by greatly reducing the correlation among feature components, thereby significantly improving the robustness of the classification network. The proposed framework reveals the important role of disentangled embedding in defending against adversarial samples. Extensive experiments on several benchmark datasets validate that the proposed defense framework is consistently robust against a wide range of adversarial attacks. Moreover, the proposed model can be combined with any typical defense method to further improve its robustness.
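The abstract's core idea is reducing correlation among feature components so that a successful attack on one component is less likely to compromise its neighbors. The paper's actual training objective is not given here; as a rough, hypothetical sketch of one common way to express such a decorrelation penalty, one can drive the off-diagonal entries of the embedding batch's covariance matrix toward zero:

```python
import numpy as np

def decorrelation_penalty(z: np.ndarray) -> float:
    """Sum of squared off-diagonal covariance entries for a batch of
    embeddings z (shape: batch x dim). Minimizing this encourages the
    feature components to become mutually uncorrelated (disentangled).
    This is an illustrative stand-in, not the paper's exact loss."""
    zc = z - z.mean(axis=0, keepdims=True)        # center each component
    cov = zc.T @ zc / (z.shape[0] - 1)            # sample covariance, dim x dim
    off_diag = cov - np.diag(np.diag(cov))        # keep only cross-component terms
    return float((off_diag ** 2).sum())

# Independent components incur a near-zero penalty; a batch whose
# components are copies of one another (fully correlated) incurs a large one.
rng = np.random.default_rng(0)
z_indep = rng.normal(size=(256, 8))               # 8 independent components
z_corr = np.repeat(rng.normal(size=(256, 1)), 8, axis=1)  # 8 identical components
```

In a full defense pipeline, a term like this would be added to the classification loss so the network learns embeddings whose components carry largely independent information.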