
ICIP 2021


Paper Detail

Paper ID: MLR-APPL-IP-5.4
Paper Title: ADVERSARIAL TRAINING WITH STOCHASTIC WEIGHT AVERAGE
Authors: Joong-Won Hwang, Youngwan Lee, Sungchan Oh, Yuseok Bae, Electronics and Telecommunications Research Institute, Republic of Korea
Session: MLR-APPL-IP-5: Machine learning for image processing 5
Location: Area E
Session Time: Tuesday, 21 September, 13:30 - 15:00
Presentation Time: Tuesday, 21 September, 13:30 - 15:00
Presentation: Poster
Topic: Applications of Machine Learning: Machine learning for image processing
IEEE Xplore Open Preview: available in IEEE Xplore
Abstract: Although adversarial training is the most reliable method to train robust deep neural networks so far, adversarially trained networks still show a large gap between their accuracy on clean images and their accuracy on adversarial images. In conventional classification problems, one can gain higher accuracy by ensembling multiple networks. In adversarial training, however, there are obstacles to adopting such ensemble methods. First, because inner maximization is expensive, adversarially training multiple networks becomes burdensome. Moreover, a naive ensemble faces a dilemma in choosing the target model with which to generate adversarial examples: training on adversarial examples of the individual members causes covariate shift, while training on those of the ensemble diminishes the benefit of ensembling. With these insights, we adopt stochastic weight averaging and improve it by accounting for the overfitting nature of adversarial training. Our method takes the benefit of ensembling while avoiding the described problems. Experiments on CIFAR10 and CIFAR100 show that our method improves robustness effectively.
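The abstract builds on stochastic weight averaging (SWA), which approximates an ensemble by averaging the weights of checkpoints collected along an SGD trajectory. A minimal sketch of that core averaging step is below; the function name `swa_average` and the toy snapshots are illustrative assumptions, and the paper's adversarial-training-specific modifications are not reproduced here.

```python
def swa_average(weight_snapshots):
    """Element-wise average of weight vectors, one per training checkpoint.

    In SWA, this averaged weight vector is used at test time in place of
    the final checkpoint, giving an ensemble-like effect at the cost of
    a single network.
    """
    n = len(weight_snapshots)
    avg = [0.0] * len(weight_snapshots[0])
    for w in weight_snapshots:
        for i, wi in enumerate(w):
            avg[i] += wi / n
    return avg

# Hypothetical checkpoints from three points on an SGD trajectory.
snapshots = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(swa_average(snapshots))  # prints [3.0, 4.0]
```

In practice the average is usually maintained incrementally during training rather than by storing all snapshots, but the resulting weights are the same element-wise mean.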