ICIP 2021 Paper Detail

Paper ID: MLR-APPL-IVSMR-3.2
Paper Title: ENHANCING ADVERSARIAL ROBUSTNESS FOR IMAGE CLASSIFICATION BY REGULARIZING CLASS LEVEL FEATURE DISTRIBUTION
Authors: Cheng Yu, Youze Xue, Jiansheng Chen, Yu Wang, Tsinghua University, China; Huimin Ma, University of Science and Technology Beijing, China
Session: MLR-APPL-IVSMR-3: Machine learning for image and video sensing, modeling and representation 3
Location: Area D
Session Time: Wednesday, 22 September, 14:30 - 16:00
Presentation Time: Wednesday, 22 September, 14:30 - 16:00
Presentation: Poster
Topic: Applications of Machine Learning: Machine learning for image & video sensing, modeling, and representation
IEEE Xplore: Open Preview available
Abstract: Recent research has shown that deep neural networks (DNNs) are vulnerable to adversarial examples. Adversarial training is practically the most effective approach to improving the robustness of DNNs against adversarial examples. However, conventional adversarial training methods focus only on the classification results or on instance-level relationships among the feature representations of adversarial examples. Inspired by the fact that adversarial examples break the distinguishability of DNN feature representations across classes, we propose Intra and Inter Class Feature Regularization (I2FR) to make the feature distribution of adversarial examples retain the same classification properties as that of clean examples. On the one hand, the intra-class regularization restricts the feature distance between adversarial examples and both the corresponding clean data and samples from the same class. On the other hand, the inter-class regularization prevents the features of adversarial examples from drifting toward other classes. By applying I2FR in both the adversarial example generation and the model training steps of adversarial training, we obtain stronger and more diverse adversarial examples, and the neural network learns a more distinguishable and reasonable feature distribution. Experiments on various adversarial training frameworks demonstrate that I2FR adapts to multiple training frameworks and outperforms state-of-the-art methods in classifying both clean data and adversarial examples.
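The two regularizers described in the abstract can be sketched in code. Everything below is an illustrative assumption, not the paper's formulation: the function name `i2fr_losses`, the squared-Euclidean distances, the class centroids, and the `margin` hinge are all placeholders that merely mirror the stated idea of pulling adversarial features toward same-class clean features (intra-class) and pushing them away from other classes (inter-class).

```python
import numpy as np

def i2fr_losses(feat_adv, feat_clean, labels, margin=1.0):
    """Illustrative sketch of intra/inter class feature regularization.

    feat_adv, feat_clean: (N, D) feature arrays for adversarial and clean
    examples; labels: (N,) integer class labels. The paper's exact loss
    forms may differ; this only mirrors the described idea.
    """
    # Intra-class term: keep each adversarial feature close to the
    # feature of its corresponding clean example (squared distance).
    intra = np.mean(np.sum((feat_adv - feat_clean) ** 2, axis=1))

    # Per-class centroids of the clean features stand in for
    # "samples from the same class" / "other classes".
    classes = np.unique(labels)
    centroids = np.stack([feat_clean[labels == c].mean(axis=0)
                          for c in classes])

    # Inter-class term: hinge penalty when an adversarial feature gets
    # within `margin` (squared distance) of another class's centroid.
    inter = 0.0
    for i, c in enumerate(labels):
        for j, other in enumerate(classes):
            if other == c:
                continue
            d2 = np.sum((feat_adv[i] - centroids[j]) ** 2)
            inter += max(0.0, margin - d2)
    inter /= len(labels)
    return intra, inter
```

In an adversarial-training loop, such terms would typically be added to the classification loss both when crafting the adversarial examples and when updating the model weights, which is how the abstract describes I2FR being applied.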