
ICIP 2021


Paper Detail

Paper ID: SS-MMSDF-1.10
Paper Title: MODEL-AGNOSTIC ADVERSARIAL EXAMPLE DETECTION THROUGH LOGIT DISTRIBUTION LEARNING
Authors: Yaopeng Wang, Lehui Xie, Ximeng Liu, Jia-Li Yin, Tingjie Zheng, Fuzhou University, China
Session: SS-MMSDF-1: Special Session: AI for Multimedia Security and Deepfake 1
Location: Area B
Session Time: Monday, 20 September, 15:30 - 17:00
Presentation Time: Monday, 20 September, 15:30 - 17:00
Presentation: Poster
Topic: Special Sessions: Artificial Intelligence for Multimedia Security and Deepfake
IEEE Xplore Open Preview: Available in IEEE Xplore
Abstract: Recent research on vision-based tasks has achieved great improvement due to the development of deep learning solutions. However, deep models have been found vulnerable to adversarial attacks where the original inputs are maliciously manipulated and cause dramatic shifts to the outputs. In this paper, we focus on adversarial attacks in image classifiers built with deep neural networks and propose a model-agnostic approach to detect adversarial inputs. We argue that the logit semantics of adversarial inputs follow a different evolution with respect to original inputs, and construct a logits-based embedding of features for effective representation learning. We train an LSTM network to further analyze the sequence of logits-based features to detect adversarial examples. Experimental results on the MNIST, CIFAR-10, and CIFAR-100 datasets show that our method achieves state-of-the-art accuracy for detecting adversarial examples and has strong generalizability.
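The abstract's pipeline (a sequence of logit vectors fed to an LSTM that scores each input as adversarial or clean) can be sketched minimally as below. This is an illustrative NumPy sketch only, not the authors' implementation: the weights are random placeholders rather than trained parameters, and the choice of feeding raw logit vectors from several steps directly into a single-layer LSTM cell is an assumption, since the paper's exact logits-based feature embedding is not specified here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_detect(logit_seq, params):
    """Run one LSTM cell over a (T, d) sequence of logit vectors and
    return a scalar detection score in (0, 1): P(input is adversarial).
    Weights are untrained placeholders; the paper learns them from data."""
    Wx, Wh, b, w_out, b_out = params
    hdim = Wh.shape[0]
    h = np.zeros(hdim)          # hidden state
    c = np.zeros(hdim)          # cell state
    for x in logit_seq:
        z = x @ Wx + h @ Wh + b             # all four gates in one affine map
        i, f, o, g = np.split(z, 4)         # input, forget, output, candidate
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    return sigmoid(h @ w_out + b_out)       # logistic detection head

def init_params(d, hdim, rng):
    """Random placeholder parameters (hypothetical shapes for this sketch)."""
    return (rng.standard_normal((d, 4 * hdim)) * 0.1,
            rng.standard_normal((hdim, 4 * hdim)) * 0.1,
            np.zeros(4 * hdim),
            rng.standard_normal(hdim) * 0.1,
            0.0)

rng = np.random.default_rng(0)
params = init_params(d=10, hdim=16, rng=rng)
seq = rng.standard_normal((5, 10))   # e.g. logits collected at 5 points for a 10-class model
score = lstm_detect(seq, params)     # scalar in (0, 1); threshold it to flag adversarial inputs
```

In practice the LSTM and the logistic head would be trained jointly on logit sequences from known clean and adversarial examples, and detection would compare `score` against a validated threshold.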