2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00472

Modeling Biological Immunity to Adversarial Examples

Abstract: While deep learning continues to permeate through all fields of signal processing and machine learning, a critical exploit in these frameworks exists and remains unsolved. These exploits, or adversarial examples, are a type of signal attack that can change the output class of a classifier by perturbing the stimulus signal by an imperceptible amount. The attack takes advantage of statistical irregularities within the training data, where the added perturbations can "move" the image across deep learning decision…
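The abstract describes the core mechanism of such attacks: a small, gradient-aligned perturbation nudges an input across a decision boundary without visibly changing it. As a rough illustration only (not the paper's own method or data), the sketch below applies an FGSM-style sign-of-gradient perturbation to a toy linear classifier in NumPy; the input dimension, random weights, and the budget eps are made-up assumptions.

```python
import numpy as np

# Toy illustration of an adversarial perturbation (hypothetical model and data).
# An FGSM-style step x_adv = x + eps * sign(grad_x loss) pushes the input toward
# the decision boundary while staying small in the infinity norm.

rng = np.random.default_rng(0)
d = 28 * 28                          # flattened "image" dimension (assumption)
w = rng.normal(size=d)               # stand-in for a trained linear classifier
x = rng.uniform(0.0, 1.0, size=d)    # a clean input in [0, 1]
y = np.sign(w @ x)                   # label the clean input so it starts out correctly classified

def margin(x_in):
    """Signed score; positive means the prediction matches y."""
    return y * (w @ x_in)

# For the loss -y * (w @ x), the gradient w.r.t. x is -y * w, so the attack
# moves each pixel by eps in the direction sign(-y * w).
eps = 0.05                           # perturbation budget (illustrative value)
x_adv = np.clip(x + eps * np.sign(-y * w), 0.0, 1.0)

print("clean margin:      ", margin(x))      # positive by construction
print("adversarial margin:", margin(x_adv))  # reduced, often past zero (class flips)
```

Even though each pixel changes by at most eps, the per-pixel shifts accumulate along the gradient direction, which is what lets an imperceptible perturbation change the predicted class.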

Cited by 18 publications (11 citation statements)
References 35 publications
“…It is observed in [447] that multitask learning generally results in improving adversarial robustness of the models. Kim et al [448] claim that by leveraging sparsity and other perceptual biological mechanisms, adversarial robustness of models can be improved. Wang et al [449] studied how to calibrate a trained model in-situ, in order to analyze the achievable tradeoffs between the standard and robust accuracy of the model.…”
Section: E. Miscellaneous Methods
Mentioning confidence: 99%
“…Rather, placing a superficial limitation on a peripheral processing stage attenuated a difference previously attributed to more central processes. Of course, this needn't mean that all human insensitivity to adversarial attacks will be explained in this way (72), but rather that approaches like this can reveal which behavioral differences have more superficial explanations and which have deeper origins. And although Elsayed et al (70) do not use the language of performance and competence, their work perfectly embodies that insight-namely, that fair comparisons must equate constraints.…”
Section: Limit Machines Like Humans
Mentioning confidence: 99%
“…Although we do not provide decision-based attack results, other empirical work suggests that robustness in this regime can be improved with population nonlinearities, sparsity, and recurrence. For example, robustness to decision-based attacks has been shown by imposing sparsification ( Marzi, Gopalakrishnan, Madhow, & Pedarsani, 2018 ; Alexos, Panousis, & Chatzis, 2020 ), recurrence ( Krotov & Hopfield, 2018 ; Yan et al, 2019 ), and specifically with the LCA network ( Springer, Strauss, Thresher, Kim, & Kenyon, 2018 ; Kim, Yarnall, Shah, & Kenyon, 2019 ; Kim, Rego, Watkins, & Kenyon, 2020 ). We offer a theoretical explanation for these findings.…”
Section: Discussion
Mentioning confidence: 99%
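The statement above attributes robustness gains to sparse coding, in particular the LCA network used in Kim, Rego, Watkins, and Kenyon (2020). As a hedged illustration of that mechanism, the sketch below runs standard locally competitive algorithm (LCA) dynamics in NumPy; the random dictionary Phi, threshold lam, time constant tau, and iteration count are illustrative assumptions, not parameters from the cited works.

```python
import numpy as np

# Minimal sparse coding with LCA dynamics: units are driven by the input,
# compete through lateral inhibition, and only strongly driven units stay
# active after soft-thresholding. All sizes and constants are assumptions.

rng = np.random.default_rng(0)
n_pixels, n_neurons = 64, 128
Phi = rng.normal(size=(n_pixels, n_neurons))
Phi /= np.linalg.norm(Phi, axis=0, keepdims=True)   # unit-norm dictionary elements

x = rng.normal(size=n_pixels)                       # stand-in for an image patch
lam, tau, n_steps = 0.1, 10.0, 200                  # threshold, time constant, iterations

def soft_threshold(u, lam):
    """Map membrane potentials to sparse activations."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

b = Phi.T @ x                                       # feed-forward drive
G = Phi.T @ Phi - np.eye(n_neurons)                 # lateral inhibition (competition)

u = np.zeros(n_neurons)
for _ in range(n_steps):
    a = soft_threshold(u, lam)
    du = b - u - G @ a                              # drive - leak - inhibition
    u += du / tau

a = soft_threshold(u, lam)
print("active neurons:", int(np.count_nonzero(a)), "of", n_neurons)
print("reconstruction error:", float(np.linalg.norm(x - Phi @ a)))
```

The soft threshold keeps only strongly driven units active, and that sparsification is the property these works connect to improved robustness against decision-based attacks.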