2020 | DOI: 10.1002/int.22267
An efficient framework for generating robust adversarial examples

Cited by 9 publications (9 citation statements) | References 11 publications
“…Though the black-box attack on these seven websites requires a relatively large number of queries, the number of queries on the other 298 websites is comparable with that of the white-box attack. We remark that state-of-the-art black-box attacks on image classifiers [16], PDF malware classifiers [32], and speaker classifiers [15] also require a much larger number of queries, due to the lack of knowledge of the classifier.…”
Section: Methods (citation type: mentioning)
confidence: 99%
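The query cost noted in the statement above is inherent to score-based black-box attacks: without access to gradients, the attacker must probe the classifier repeatedly. The following is a minimal illustrative sketch of such a query loop in Python; the query_scores stub, the L_inf budget eps, and the random-search strategy are assumptions made here for illustration, not the attack used in the cited works.

import numpy as np

def query_scores(x):
    # Hypothetical stand-in for the target classifier: returns a 10-class
    # score vector and can only be queried, not inspected.
    rng = np.random.default_rng(int(x.sum() * 1e6) % (2**32))
    return rng.random(10)

def random_search_attack(x, true_label, eps=0.05, max_queries=1000):
    # Search the L_inf ball of radius eps around x, spending one query per
    # candidate; the lack of gradient information is what drives the query count up.
    x_adv = x.copy()
    best = query_scores(x_adv)[true_label]
    for q in range(1, max_queries + 1):
        delta = np.random.uniform(-eps, eps, size=x.shape)
        candidate = np.clip(x + delta, 0.0, 1.0)
        scores = query_scores(candidate)
        if scores[true_label] < best:                     # keep perturbations that
            best, x_adv = scores[true_label], candidate   # weaken the true class
        if np.argmax(scores) != true_label:               # evasion succeeded
            return candidate, q
    return x_adv, max_queries

x = np.random.rand(3, 32, 32)   # toy "image" with values in [0, 1]
x_adv, queries = random_search_attack(x, true_label=0)
print("queries used:", queries)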
“…Prior research shows that ML-based classifiers are vulnerable to evasion attacks, for example [12–19]. Such attacks have been extensively studied in image recognition and malware detection, but little has been done in anti-phishing.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
“…Different from traditional media, the supervision of massive digital media contents demands efficient deep neural networks (DNNs) [7–12]. However, a large number of existing works show that DNNs are surprisingly vulnerable to adversarial examples [13–15], resulting in security risks. For example, as shown in Figure 1, by intentionally adding human-imperceptible perturbations to illegal or sensitive images, attackers can fool state-of-the-art supervision models into erroneous predictions, which leads to negative impacts on information security in media convergence.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
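As a concrete illustration of the "human-imperceptible perturbation" idea in the statement above, the sketch below applies a one-step FGSM-style perturbation, moving every input dimension a tiny amount along the sign of the loss gradient. The toy logistic-regression model, its random weights, and the eps budget are illustrative assumptions rather than the supervision models discussed in the citing paper.

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=784)                  # toy model: p(y=1 | x) = sigmoid(w.x + b)
b = 0.1

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y, eps=0.03):
    # One-step L_inf attack: shift every pixel by eps along the sign of the
    # loss gradient. For logistic regression with cross-entropy loss the
    # input gradient is (p - y) * w, so no autodiff framework is needed.
    p = predict(x)
    grad_x = (p - y) * w
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

x = rng.random(784)                        # toy "image" flattened to 784 pixels in [0, 1]
y = 1.0 if predict(x) >= 0.5 else 0.0      # the model's own label, treated as ground truth
x_adv = fgsm(x, y)
print("clean prob:", predict(x), "adversarial prob:", predict(x_adv))
print("max per-pixel change:", np.max(np.abs(x_adv - x)))   # bounded by eps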
“…However, these strategies either reduce the generalizability of the model or are not effective against some adversarial examples. Complete defense methods do not explicitly detect adversarial examples; as a result, they remain vulnerable to stronger attacks [12,13]. Another type of defense algorithm, detection only [14–16], uses the differences between adversarial examples and the original images to detect potential adversarial examples and reject their further processing.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
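A minimal sketch of the detection-only idea described above, in the spirit of feature squeezing: an input is flagged, and its further processing rejected, when the model's scores on the raw input and on a bit-depth-reduced copy diverge beyond a threshold. The stub model, the squeezing function, and the threshold value are illustrative assumptions, not the detectors cited in the statement.

import numpy as np

def model_scores(x):
    # Hypothetical classifier returning a normalized 10-class score vector.
    rng = np.random.default_rng(int(x.sum() * 1e6) % (2**32))
    z = rng.random(10)
    return z / z.sum()

def squeeze(x, bits=4):
    # Reduce pixel bit depth; fine-grained adversarial noise tends not to
    # survive this, while natural images are barely affected.
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def is_adversarial(x, threshold=0.3):
    # Detection only: flag the input (and reject further processing) when the
    # L1 gap between scores on the raw and squeezed inputs is suspiciously large.
    gap = np.abs(model_scores(x) - model_scores(squeeze(x))).sum()
    return gap > threshold

x = np.random.rand(3, 32, 32)              # toy "image" with values in [0, 1]
print("flagged as adversarial:", is_adversarial(x))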