Detecting Adversarial Samples from Artifacts
2017, Preprint
DOI: 10.48550/arxiv.1703.00410

Cited by 256 publications (454 citation statements). References 0 publications.
“…Adversarial samples are generated using the same set (and the same settings) of attacks used by Lee et al. (2018). Comparison with SOTA and results: We compare the performance of iDECODe with supervised detectors such as LID (Ma et al. 2018), Mahala (Lee et al. 2018), and a detector based on combining kernel density estimation (Feinman et al. 2017) with predictive uncertainty (KD+PU). These detectors are trained on adversarial samples generated by the FGSM attack.…”
Section: Detection of Adversarial Samples (mentioning)
confidence: 99%
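The excerpt above notes that the supervised baselines (KD+PU, LID, Mahala) are trained on adversarial samples generated with FGSM. For orientation, below is a minimal sketch of the one-step FGSM perturbation; the PyTorch wrapper, the `model` argument, and the epsilon value are illustrative assumptions, not code from the cited papers.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """One-step Fast Gradient Sign Method: move x by eps in the direction of
    the sign of the input gradient of the loss, then clip to the valid range."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```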
“…We compare with KD+PU, LID, and Mahala as the supervised adversarial detectors. Kernel Density (KD) (Feinman et al. 2017) detects datapoints lying in low Gaussian density regions as adversarial.…”
Section: C3 Adversarial (mentioning)
confidence: 99%
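As a rough illustration of the KD criterion described in the excerpt above, the sketch below fits a Gaussian kernel density estimate per class on clean feature representations and flags test points whose log-density under their predicted class falls below a threshold. The bandwidth and threshold are illustrative assumptions rather than values from Feinman et al. (2017).

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_class_kdes(clean_features, clean_labels, bandwidth=1.0):
    """Fit one Gaussian KDE per class on clean (non-adversarial) features."""
    return {c: KernelDensity(kernel="gaussian", bandwidth=bandwidth)
                 .fit(clean_features[clean_labels == c])
            for c in np.unique(clean_labels)}

def flag_low_density(kdes, test_features, predicted_labels, threshold):
    """Flag a point as adversarial if its log-density under the KDE of its
    predicted class falls below the threshold."""
    log_density = np.array([kdes[c].score_samples(x[None, :])[0]
                            for x, c in zip(test_features, predicted_labels)])
    return log_density < threshold
```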
“…Another line of research is k-nearest neighbors (kNN) [1,5,14,41] based methods, which focus on distinguishing adversarial examples based on their relationship to other clean images in the calibrated dataset. Furthermore, a family of anomaly-detection approaches [60] has been deployed to detect adversarial examples: Feinman et al. [16] model the clean distribution with kernel density estimation. Ma et al. [33] characterize the dimensional properties of the adversarial features by local intrinsic dimensionality.…”
Section: Adversarial Defenses (mentioning)
confidence: 99%
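To make the LID reference in the excerpt above concrete, here is a minimal sketch of the maximum-likelihood estimator of local intrinsic dimensionality that Ma et al. (2018) build their detector on: the LID of a point is estimated from its distances to the k nearest neighbours in a reference batch. The choice of k and the reference batch are assumptions for illustration.

```python
import numpy as np

def lid_mle(x, reference_batch, k=20):
    """Maximum-likelihood LID estimate of point x from the distances to its
    k nearest neighbours in reference_batch (x itself excluded from the batch)."""
    dists = np.linalg.norm(reference_batch - x, axis=1)
    knn = np.sort(dists)[:k]
    knn = np.maximum(knn, 1e-12)  # guard against zero distances
    # LID_MLE = -((1/k) * sum_i log(r_i / r_k))^(-1), with r_k the k-th neighbour distance
    return -1.0 / np.mean(np.log(knn / knn[-1]))
```

In Ma et al.'s setting, adversarial inputs tend to yield larger LID estimates in deep feature space than clean inputs, and that gap is the signal the detector exploits.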
“…Adversarial detectors. In experiments, we choose kernel density (KD) [16], local intrinsic dimensionality (LID) [33], Mahalanobis distance (MAHA) [26], and deep neural network (DNN) [37] as our baselines. The parameters for KD, LID and MAHA are set per the original papers.…”
Section: Setup (mentioning)
confidence: 99%
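Of the baselines listed above, the Mahalanobis-distance score of Lee et al. (2018) admits a compact sketch: fit class-conditional Gaussian means with a shared (tied) covariance on clean features, then score a test point by its closest class-conditional Mahalanobis distance. The regularisation term below is an assumption for numerical stability, not part of the original recipe.

```python
import numpy as np

def fit_class_gaussians(features, labels):
    """Class means plus a single tied covariance, estimated on clean features."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.concatenate([features[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(features)
    precision = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # assumed regulariser
    return means, precision

def mahalanobis_score(x, means, precision):
    """Negative minimum squared Mahalanobis distance over classes; lower
    (more negative) scores indicate out-of-distribution or adversarial inputs."""
    dists = [float((x - mu) @ precision @ (x - mu)) for mu in means.values()]
    return -min(dists)
```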