2017
DOI: 10.1007/978-3-319-57463-9_4

‘Security Theater’: On the Vulnerability of Classifiers to Exploratory Attacks

Abstract: The increasing scale and sophistication of cyber-attacks have led to the adoption of machine learning-based classification techniques at the core of cybersecurity systems. These techniques promise scale and accuracy, which traditional rule/signature-based methods cannot provide. However, classifiers operating in adversarial domains are vulnerable to evasion attacks by an adversary who is capable of learning the behavior of the system by employing intelligently crafted probes. Classification accuracy in such domains p…
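The probe-based threat model sketched in the abstract can be illustrated with a small Python example. This is a minimal sketch, assuming a scikit-learn-style black-box predict() interface and synthetic data; the defender model, probe budget, and surrogate choice are illustrative assumptions, not the paper's setup. It shows how query/label probes alone let an adversary fit a surrogate that mimics the deployed classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Stand-in "defender" classifier; the adversary never sees its internals.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
y = (X[:, :3].sum(axis=1) > 0).astype(int)          # 1 = malicious, 0 = benign
defender = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Exploration phase: spend a probe budget querying the black box with
# synthetic inputs and record the labels it returns.
probe_budget = 2000
probes = rng.normal(size=(probe_budget, 10))
labels = defender.predict(probes)

# A surrogate trained only on probe/label pairs approximates the defender,
# giving the adversary an offline model to craft evasive samples against.
surrogate = LogisticRegression(max_iter=1000).fit(probes, labels)

X_test = rng.normal(size=(1000, 10))
agreement = accuracy_score(defender.predict(X_test), surrogate.predict(X_test))
print(f"surrogate agrees with defender on {agreement:.1%} of unseen inputs")
```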

Cited by 8 publications (16 citation statements)
References 23 publications
“…The researchers [81,83,84] proposed different techniques to detect adversarial examples in the input and to create different benign and adversarial examples. As we mentioned earlier, the target of the attacker is to add more noise to formulate effective adversarial examples. [Interleaved table rows from the citing article, summarizing attack types and their limitations: integrity [53], availability [57], privacy violation [61], targeted [65], and indiscriminate [69] attacks.] According to [83], it is not easy to detect such adaptive attacks, and some detection techniques work effectively while others do not.…”
Section: Detecting Adversarial Examples
Mentioning, confidence: 99%
“…Machine learning systems deployed in the real world are vulnerable to exploratory attacks, which aim to degrade the learned model over time. Exploratory attacks are launched by an adversary by first learning about the characteristics of the system through carefully crafted probes and then morphing suspicious samples to cause evasion at test time (Biggio et al.; Biggio, Fumera, & Roli; Papernot, McDaniel, & Goodfellow; Sethi et al.; Tramèr et al.). Such attacks are commonplace and difficult to avoid, as they rely on the same black-box access to the system that a benign user is entitled to.…”
Section: Related Work on Security of Machine Learning
Mentioning, confidence: 99%
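The probe-then-morph loop described in this excerpt can be made concrete with a short sketch. The greedy random-perturbation strategy, step size, and toy data below are illustrative assumptions, not the attack procedure from the cited works; the point is only that label-only black-box access suffices to search for an evading sample.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy defender; the adversary may only call black_box(), never inspect it.
X = rng.normal(size=(2000, 10))
y = (X[:, :3].sum(axis=1) > 0).astype(int)          # 1 = malicious, 0 = benign
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def black_box(x):
    """Benign-user-level access: submit a sample, get back a label."""
    return int(clf.predict(x.reshape(1, -1))[0])

def probe_and_morph(x_malicious, n_probes=500, step=0.5):
    """Greedily perturb one feature per probe, keeping changes that
    push the sample across the decision boundary (label 1 -> 0)."""
    x = x_malicious.copy()
    for _ in range(n_probes):
        if black_box(x) == 0:                        # evasion achieved
            return x, True
        candidate = x.copy()
        candidate[rng.integers(x.size)] += rng.choice([-step, step])
        if black_box(candidate) == 0 or rng.random() < 0.1:
            x = candidate                            # accept promising moves
    return x, False

x_mal = X[y == 1][0]
x_evaded, ok = probe_and_morph(x_mal)
print("evasion succeeded:", ok)
```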
“…By measuring adversarial cost in terms of the number of features that must be modified to achieve evasion, the efficacy of feature-bagged models was demonstrated (Lowd & Meek), as robust models, by design, will require a majority of the features to be modified for a successful evasion. This reasoning is not valid for the case of an indiscriminate exploratory attack, where an adversary is interested in generating any sample which evades C, without any predefined set of attack samples (Sethi et al.). In this setting, the impact of a feature-bagged ensemble is similar to that of a simple model, as seen from the evasion probability and adversarial certainty computations of Table .…”
Section: Adversarial Uncertainty—On the Ability to React to Attacks I…
Mentioning, confidence: 99%
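The adversarial-cost argument in this excerpt (count how many features must change before the verdict flips, for a single model versus a feature-bagged ensemble) can be sketched as follows. The synthetic data, the naive change-features-in-random-order strategy, and the model choices are assumptions for illustration, not the evaluation from the cited work.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 20))
y = (X.sum(axis=1) > 0).astype(int)                  # 1 = malicious, 0 = benign

single = DecisionTreeClassifier(random_state=0).fit(X, y)
# Feature-bagged ensemble: each base tree sees a random 30% subset of features.
bagged = BaggingClassifier(n_estimators=25, max_features=0.3,
                           random_state=0).fit(X, y)

def adversarial_cost(model, x_mal, x_ben, max_changes=20):
    """Count how many features must be copied from a benign sample
    before the model's label flips from malicious (1) to benign (0)."""
    x = x_mal.copy()
    changed = 0
    for i in rng.permutation(x.size):                # random change order
        if model.predict(x.reshape(1, -1))[0] == 0:
            return changed
        x[i] = x_ben[i]
        changed += 1
        if changed >= max_changes:
            break
    return changed

x_mal = X[y == 1][0]
x_ben = X[y == 0][0]
print("cost vs single tree:    ", adversarial_cost(single, x_mal, x_ben))
print("cost vs bagged ensemble:", adversarial_cost(bagged, x_mal, x_ben))
```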