ICIP 2021 Paper Detail

Paper ID: SS-MMSDF-2.12
Paper Title: A NEURO-INSPIRED AUTOENCODING DEFENSE AGAINST ADVERSARIAL ATTACKS
Authors: Can Bakiskan, Metehan Cekic, Ahmet Sezer, Upamanyu Madhow; University of California, Santa Barbara, United States
Session: SS-MMSDF-2: Special Session: AI for Multimedia Security and Deepfake 2
Location: Area A
Session Time: Tuesday, 21 September, 15:30 - 17:00
Presentation Time: Tuesday, 21 September, 15:30 - 17:00
Presentation: Poster
Topic: Special Sessions: Artificial Intelligence for Multimedia Security and Deepfake
IEEE Xplore: Open Preview available
Abstract: Deep Neural Networks (DNNs) are vulnerable to adversarial attacks: carefully constructed perturbations to an image can seriously impair classification accuracy, while being imperceptible to humans. The most effective current defense is to train the network using adversarially perturbed examples. In this paper, we investigate a radically different, neuro-inspired defense mechanism, aiming to reject adversarial perturbations before they reach a classifier DNN, using an encoder with characteristics commonly observed in biological vision, followed by a decoder restoring image dimensions that can be cascaded with standard CNN architectures. Unlike adversarial training, all training is based on clean images. Our experiments on the CIFAR-10 and a subset of ImageNet datasets show performance competitive with state-of-the-art adversarial training, and point to the promise of bottom-up neuro-inspired techniques for the design of robust neural networks.
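The overall pattern the abstract describes (an autoencoder trained only on clean images, whose decoder restores image dimensions so it can be prepended to an unmodified classifier) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the layer sizes, the plain convolutional encoder, and the names `DefenseAutoencoder`/`cascade` are assumptions, and the paper's neuro-inspired encoder characteristics are not reproduced here.

```python
# Minimal PyTorch sketch of the defense pattern from the abstract:
# an autoencoder trained on clean images only is prepended to a
# standard classifier so inputs pass through encode/decode first.
# Architecture details here are illustrative assumptions.
import torch
import torch.nn as nn

class DefenseAutoencoder(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        # Encoder: downsample the image to a bottleneck representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: restore the original image dimensions so the output
        # can be fed to a standard CNN classifier unchanged.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def cascade(autoencoder, classifier):
    """Prepend the clean-trained autoencoder to a standard classifier."""
    return nn.Sequential(autoencoder, classifier)

if __name__ == "__main__":
    ae = DefenseAutoencoder()
    x = torch.randn(4, 3, 32, 32)   # CIFAR-10-sized batch
    recon = ae(x)
    assert recon.shape == x.shape   # decoder restores image dimensions
    # Training uses clean images only, e.g. a reconstruction loss:
    loss = nn.functional.mse_loss(recon, x)
```

Unlike adversarial training, nothing in this loop ever sees a perturbed example; robustness is meant to come from the front-end rejecting perturbations before they reach the classifier.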