ICIP 2021 Paper Detail

Paper ID: SS-MMSDF-2.10
Paper Title: IMPERCEPTIBLE ADVERSARIAL EXAMPLES FOR FAKE IMAGE DETECTION
Authors: Quanyu Liao, Chengdu University of Information Technology, China; Yuezun Li, University at Buffalo, United States; Xin Wang, Bin Kong, Keya Medical, United States; Bin Zhu, Microsoft Research Asia, China; Siwei Lyu, University at Buffalo, United States; Youbing Yin, Qi Song, Keya Medical, United States; Xi Wu, Chengdu University of Information Technology, China
Session: SS-MMSDF-2: Special Session: AI for Multimedia Security and Deepfake 2
Location: Area A
Session Time: Tuesday, 21 September, 15:30 - 17:00
Presentation Time: Tuesday, 21 September, 15:30 - 17:00
Presentation: Poster
Topic: Special Sessions: Artificial Intelligence for Multimedia Security and Deepfake
Abstract: Highly realistic fake images generated with Deepfake or GANs can fool people and cause great disturbance to our society. Many methods have been proposed to detect fake images, but they are vulnerable to adversarial perturbations: intentionally designed noise that can lead to wrong predictions. Existing methods for attacking fake image detectors usually generate adversarial perturbations that cover almost the entire image, which is redundant and increases the perceptibility of the perturbations. In this paper, we propose a novel method to disrupt fake image detection by identifying the pixels that are key to a fake image detector and attacking only those pixels, which results in $L_0$ and $L_2$ norms of the adversarial perturbations that are much smaller than those of existing works. Experiments on two public datasets with three fake image detectors show that the proposed method achieves state-of-the-art performance in both white-box and black-box attacks.
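
For intuition only, the sketch below shows a generic "key-pixel" adversarial perturbation in PyTorch, restricting the attack to a few influential pixels so the $L_0$ and $L_2$ norms stay small. This is not the authors' algorithm: the detector interface, the top-k gradient-magnitude selection rule, the step size, and the iteration count are all illustrative assumptions.

```python
# Illustrative sketch of a sparse ("key-pixel") adversarial attack in PyTorch.
# Assumption: `detector` is a classifier taking a batch of images (N, C, H, W).
import torch

def key_pixel_attack(detector, image, label, k=100, step=0.01, iters=20):
    """Perturb only the k pixels with the largest gradient magnitude."""
    adv = image.clone().detach()
    for _ in range(iters):
        adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(
            detector(adv.unsqueeze(0)), label.unsqueeze(0))
        grad, = torch.autograd.grad(loss, adv)
        # Rank pixels by gradient magnitude (summed over channels), keep top-k.
        saliency = grad.abs().sum(dim=0).flatten()
        mask = torch.zeros_like(saliency)
        mask[torch.topk(saliency, k).indices] = 1.0
        mask = mask.view(1, *image.shape[1:])
        # Ascend the loss only on the selected pixels, then clamp to valid range.
        adv = (adv + step * grad.sign() * mask).clamp(0, 1).detach()
    return adv
```

Because the perturbation mask zeroes out all but k pixel locations, the resulting perturbation is sparse by construction; a denser, whole-image attack (mask of all ones) would typically be more perceptible for the same loss increase.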