2017
DOI: 10.48550/arxiv.1712.08250
Preprint

ReabsNet: Detecting and Revising Adversarial Examples

Cited by 1 publication (2 citation statements) · References 9 publications
“…Finally, Chen et al. [28] have elaborated a detection-and-reforming architecture called ReabsNet. When ReabsNet receives an image x, it uses an ADM (a DNN trained with adversarial training) to check whether x is legitimate or adversarial.…”
Section: Auxiliary Detection Models (ADMs)
confidence: 99%
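The detect-then-revise flow described in the citation statement above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' actual implementation: `detector` stands in for the adversarially trained ADM, `reformer` for the revision step that pulls a suspected adversarial image back toward legitimate inputs, and the toy stand-ins below are invented for demonstration only.

```python
# Hedged sketch of a ReabsNet-style pipeline: detect a suspected
# adversarial input, revise it, then classify. All names are illustrative.

def classify_with_reabsnet(x, detector, reformer, classifier):
    """Return a label for input x, revising it first if flagged adversarial."""
    if detector(x) == "adversarial":
        x = reformer(x)  # revise the input before classification
    return classifier(x)

# Toy stand-ins: flag inputs outside [-1, 1] as "adversarial",
# revise by clipping back into range, and classify by sign.
detector = lambda x: "adversarial" if abs(x) > 1.0 else "legitimate"
reformer = lambda x: max(-1.0, min(1.0, x))
classifier = lambda x: 1 if x >= 0 else 0

print(classify_with_reabsnet(0.5, detector, reformer, classifier))   # legitimate path -> 1
print(classify_with_reabsnet(-3.0, detector, reformer, classifier))  # revised path -> 0
```

The key design point captured here is that the classifier itself is untouched: all robustness comes from the detection and revision stages placed in front of it.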
“…et al., Grosse et al., Metzen et al. and Chen et al. [28, 62, 68, 127] have proposed defenses based on ADMs. Grosse et al. [68] have adapted an application classifier f to also act as an ADM, training it on a dataset containing n + 1 classes. The procedure followed by the authors consists of generating adversarial images x′_i for each legitimate image (x_i, y_j) that belongs to the training set T, where i ≤ |T| × m (m being the number of attack algorithms used) and j ≤ n. After the generation of the adversarial images, a new training set T_1 is formed, where T_1 = T ∪ {(x′_i, n + 1) : i ≤ |T| × m}.…”
confidence: 99%
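The (n+1)-class augmentation attributed to Grosse et al. above can be sketched as follows. This is a hedged illustration under stated assumptions: `attacks` is a hypothetical list of attack functions (real ones would be perturbation methods such as FGSM), and the scalar "images" are toy data invented for the example.

```python
# Sketch of building T1 = T ∪ {(x'_i, n+1)}: each of m attack algorithms
# produces an adversarial variant of every legitimate example, and every
# variant is labelled with the extra class n + 1.

def augment_with_adversarial_class(T, attacks, n):
    """Return T1 from a legitimate set T of (x, y) pairs and m attacks."""
    adversarial = [(attack(x), n + 1) for (x, y) in T for attack in attacks]
    return T + adversarial  # |T1| = |T| * (1 + m)

# Toy example: n = 2 legitimate classes, m = 2 "attacks" perturbing scalars.
T = [(0.2, 0), (0.7, 1)]
attacks = [lambda x: x + 0.1, lambda x: x - 0.1]
T1 = augment_with_adversarial_class(T, attacks, n=2)
print(len(T1))  # |T| * (1 + m) = 2 * 3 = 6
print(all(label == 3 for _, label in T1[2:]))  # adversarial pairs carry label n + 1
```

The classifier f trained on T1 then serves double duty: predicting class n + 1 is itself the detection signal, so no separate detector network is needed.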