2013
DOI: 10.1007/978-3-642-40994-3_25
Evasion Attacks against Machine Learning at Test Time

Abstract: In security-sensitive applications, the success of machine learning depends on a thorough vetting of their resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks. Fol…
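The gradient-based evasion idea summarized in the abstract can be illustrated with a minimal sketch. This is not the paper's exact algorithm (which additionally adds a kernel-density "mimicry" term and is evaluated on PDF malware features); the discriminant function, step size, and manipulation budget below are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's exact algorithm): projected
# gradient descent that pushes a malicious sample toward the region the
# classifier scores as legitimate, within a manipulation budget d_max.
import numpy as np

def evade(x0, g, grad_g, step=0.1, d_max=1.0, max_iter=100):
    """Descend the classifier's discriminant g(x) starting from the malicious
    sample x0 until g(x) < 0 (scored as legitimate) or max_iter is reached,
    keeping the manipulation within an L2 ball of radius d_max around x0."""
    x = x0.copy()
    for _ in range(max_iter):
        if g(x) < 0:                      # classified as legitimate: evasion succeeded
            break
        x = x - step * grad_g(x)          # gradient-descent step on g
        delta = x - x0
        norm = np.linalg.norm(delta)
        if norm > d_max:                  # project back onto the feasible ball
            x = x0 + delta * (d_max / norm)
    return x

# Hypothetical linear discriminant g(x) = w.x + b for illustration only.
w, b = np.array([1.0, -0.5]), 0.2
g = lambda x: w @ x + b
grad_g = lambda x: w
x_adv = evade(np.array([2.0, 1.0]), g, grad_g, d_max=2.0)  # g(x_adv) < 0 here
```

The same descent applies to any differentiable discriminant; the paper studies it against nonlinear SVMs and neural networks as well.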

Cited by 1,451 publications (1,505 citation statements)
References: 14 publications

“…Two main attack scenarios are often considered in the field of adversarial machine learning, i.e., evasion and poisoning [4], [7], [57], [3], [2], [31], [6], [5]. In an evasion attack, the attacker manipulates malicious samples at test time to have them misclassified as legitimate by a trained classifier, without having influence over the training data.…”
Section: Attack Model and Scenarios (mentioning)
confidence: 99%
“…In previous work (Biggio et al, 2013), the success rate was about the same, even if they made a deeper analysis of it using various settings of their SVM.…”
Section: An Illustration of the Results of This Attack Is Shown On… (mentioning)
confidence: 90%
“…Several attacks have been proposed (e.g. (Ateniese et al., 2013; Biggio et al., 2013)). In (Biggio et al., 2013), the authors propose to evade SVM and Neural Network classifiers using gradient-descent algorithms.…”
Section: Introduction (mentioning)
confidence: 99%
“…In this framework, given an attacker's cost function, the goal is to find a lowest attacker-cost instance that the classifier labels as negative (i.e., it passes through). Various other papers in the literature have used such a model [7], [28], [29]. Specifically, we consider how many tries does it take an adversary to compromise the security of a classifier.…”
Section: Attack Model (mentioning)
confidence: 99%
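A compact restatement of the framework quoted above, using notation assumed here rather than taken from the citing paper: with original malicious instance $x_0$, attacker cost function $c$, and classifier decision $f(x) \in \{-1, +1\}$ (negative meaning the sample passes), the attacker seeks

$$x^\star = \operatorname*{arg\,min}_{x \,:\, f(x) = -1} \; c(x, x_0),$$

i.e., the lowest-cost instance that the classifier labels as negative.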