2018
DOI: 10.1016/j.patcog.2018.07.023

Wild patterns: Ten years after the rise of adversarial machine learning

Abstract: Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, have been investigated in …


Cited by 1,031 publications (1,009 citation statements). References 80 publications (267 reference statements).
“…As machine learning becomes more ubiquitous in applications, so too do attacks on the learning algorithms they are based on [5][6][7][8][9][10][11][12][14][15][16][17]. The key assumption usually made in machine learning is that the training data is independent of the model and the training process.…”
Section: Adversarial Quantum Machine Learning (mentioning)
confidence: 99%
“…While a laundry list of attacks is known against machine learning systems, the defences that have been developed thus far are somewhat limited [9,43]. A commonly used tactic is to replace classifiers with robust versions of the same classifier.…”
Section: Adversarial Quantum Machine Learning (mentioning)
confidence: 99%
“…Given the immense impact of deep learning on a diversity of fields, its vulnerability to tiny adversarial perturbations [1,2] is of great concern. For image datasets, for example, such perturbations are almost imperceptible for humans, but they can render state-of-the-art models useless, causing misclassification with high confidence.…”
Section: Introduction (mentioning)
confidence: 99%
“…Attacks: State-of-the-art ℓ∞-bounded attacks (used in our evaluations) are all based on gradient ascent on the cost function in (1). The Fast Gradient Sign Method (FGSM) [3] computes the perturbation as e = ε · sign(∇_x L(θ, x, y))…”
Section: Introduction (mentioning)
confidence: 99%
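
To make the FGSM step quoted above concrete, here is a minimal sketch of the perturbation computation, assuming a PyTorch classifier trained with cross-entropy loss and inputs scaled to [0, 1]; the names model, images, labels, and epsilon are illustrative placeholders, not taken from the cited paper.

```python
# Minimal FGSM sketch (assumes a PyTorch classifier and cross-entropy loss;
# `model`, `images`, `labels`, and `epsilon` are illustrative placeholders,
# not names from the cited paper).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon):
    """Return perturbed inputs x + epsilon * sign(grad_x L(theta, x, y))."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # One gradient-ascent step on the loss, bounded in the l-infinity norm by epsilon.
    perturbation = epsilon * images.grad.sign()
    adv_images = (images + perturbation).clamp(0.0, 1.0).detach()
    return adv_images
```

Taking the sign of the gradient, rather than the gradient itself, is what keeps the perturbation inside the ℓ∞ ball of radius ε around the original input.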