2020
DOI: 10.1007/978-3-030-47436-2_10

Data-Free Adversarial Perturbations for Practical Black-Box Attack

Abstract: Neural networks are vulnerable to adversarial examples, which are malicious inputs crafted to fool pre-trained models. Adversarial examples often exhibit black-box transferability: an adversarial example crafted for one model can also fool another model. However, existing black-box attack methods require samples from the training data distribution to improve the transferability of adversarial examples across different models. Because of this data dependence, the fooling ability of adversarial p…

Cited by 13 publications (5 citation statements)
References 10 publications
“…Fooling Rate. The expected number of times the attack is able to flip the label, known as fooling rate, denoted by FR [219], measures the percentage of success for an adversarial attack, formally defined [220]:…”
Section: A. Evaluation Metrics
Mentioning confidence: 99%
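The formal definition is cut off in the excerpt above, but the fooling rate is simply the fraction of inputs whose predicted label changes once the perturbation is applied. A minimal sketch of that computation, assuming a hypothetical `model.predict` interface that returns class labels:

```python
import numpy as np

def fooling_rate(model, clean_inputs, adversarial_inputs):
    """Fraction of samples whose predicted label flips after perturbation.

    `model.predict` is an assumed interface returning hard class labels;
    the metric itself is #(label changes) / #(samples).
    """
    clean_labels = model.predict(clean_inputs)
    adv_labels = model.predict(adversarial_inputs)
    return float(np.mean(clean_labels != adv_labels))
```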
“…Data-based computer vision attacks depend on training and validation datasets to craft adversaries, while data-free attacks rely on other signals. There are some data-free approaches in computer vision, for example, by maximizing activations at each layer (Mopuri, Garg, and Radhakrishnan 2017; Mopuri, Ganeshan, and Babu 2018), class activations (Mopuri, Uppala, and Babu 2018), and pretrained models and proxy datasets (Huan et al. 2020). However, there has been no work in NLP systems for data-free attacks.…”
Section: Related Work
Mentioning confidence: 99%
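One of the data-free strategies mentioned in that excerpt, maximizing activations at each layer, can be sketched roughly as below. This is a PyTorch sketch under assumed hyperparameters (perturbation bound, step count, learning rate) and is not the exact procedure of any of the cited papers:

```python
import torch

def data_free_perturbation(model, shape=(1, 3, 224, 224), eps=10 / 255,
                           steps=1000, lr=0.01, device="cpu"):
    """Craft a perturbation without training data by maximizing the
    activation norms of convolutional layers when the perturbation
    alone is fed to the network. All hyperparameters are illustrative."""
    model.eval().to(device)
    delta = torch.zeros(shape, device=device, requires_grad=True)

    # Collect intermediate activations of conv layers via forward hooks.
    activations = []
    hooks = [m.register_forward_hook(lambda _m, _i, out: activations.append(out))
             for m in model.modules() if isinstance(m, torch.nn.Conv2d)]

    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        activations.clear()
        model(delta)                                 # forward pass on the perturbation alone
        loss = -sum(a.norm() for a in activations)   # maximize activation magnitudes
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-eps, eps)                 # keep the perturbation imperceptibly small

    for h in hooks:
        h.remove()
    return delta.detach()
```

The resulting perturbation is then added to arbitrary inputs at attack time; no samples from the victim model's training distribution are needed, which is what distinguishes this family of methods from data-based black-box attacks.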
“…They also have no access to the datasets used for training. Huan et al. [8] showed that even under these conditions, many current models are still at risk. In contrast, white-box attacks may use any of those elements to perform the attack.…”
Section: Related Work
Mentioning confidence: 99%