2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00031
DaST: Data-Free Substitute Training for Adversarial Attacks

Cited by 105 publications (86 citation statements); References 9 publications.
“…Recently, this topic has attracted attention in image classification [18,30,32,46] and text classification [20,31]. In this work, we show that model extraction attacks also pose a threat to sequential recommender systems.…”
Section: Introduction
confidence: 69%
“…In contrast to this, black-box attacks do not require directly knowing the model and trained parameters. Black-box attacks rely on alternative information like query access to the classifier [2], knowing the training dataset [1], or transferring adversarial examples from one trained classifier to another [9].…”
Section: Introduction
confidence: 99%
“…Based on gradient descent and L-norm optimization methods, Liu et al. [6] first showed how to subject a vision-based malware detector to adversarial example attacks. Zhou et al. [7] were the first to propose training a substitute model for adversarial attacks without any real data.…”
Section: Introduction
confidence: 99%
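The substitute-training idea referenced above can be illustrated with a toy sketch: label purely synthetic inputs by querying a black-box target, fit a substitute on those query-labeled pairs, then transfer adversarial examples crafted against the substitute back to the target. This is only a minimal numpy illustration of the concept, not the authors' actual method (DaST trains a generative model to produce the synthetic queries); the linear target, the sample sizes, and the perturbation budget here are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box target: a fixed linear classifier whose weights
# the attacker never sees; only hard-label query access is assumed.
W_target = rng.normal(size=(2, 4))

def query_target(x):
    """Black-box oracle: returns predicted labels only."""
    return np.argmax(x @ W_target.T, axis=1)

def substitute_probs(x, w):
    """Softmax outputs of the substitute model."""
    logits = x @ w.T
    logits = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

# Step 1: label purely synthetic (random) inputs by querying the target.
X_syn = rng.normal(size=(2000, 4))
y_syn = query_target(X_syn)

# Step 2: fit a softmax-regression substitute on the query-labeled pairs.
W_sub = np.zeros((2, 4))
for _ in range(300):
    grad_logits = substitute_probs(X_syn, W_sub)
    grad_logits[np.arange(len(y_syn)), y_syn] -= 1.0  # p - one_hot(y)
    W_sub -= 0.1 * (grad_logits.T @ X_syn) / len(y_syn)

# Step 3: craft FGSM adversarial examples against the substitute and
# transfer them to the black-box target.
X_test = rng.normal(size=(500, 4))
y_clean = query_target(X_test)
p = substitute_probs(X_test, W_sub)
p[np.arange(len(y_clean)), y_clean] -= 1.0
grad_x = p @ W_sub                      # d(cross-entropy)/dx per example
X_adv = X_test + 0.5 * np.sign(grad_x)  # FGSM step, eps = 0.5

agreement = (np.argmax(X_test @ W_sub.T, axis=1) == y_clean).mean()
fool_rate = (query_target(X_adv) != y_clean).mean()
print(f"substitute agreement: {agreement:.2f}, transfer fool rate: {fool_rate:.2f}")
```

Because the substitute closely approximates the target's decision boundary after training on query-labeled synthetic data alone, gradients computed on the substitute yield perturbations that transfer to the black box.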