2014 IEEE International Conference on Data Mining
DOI: 10.1109/icdm.2014.117

On Sparse Feature Attacks in Adversarial Learning

Abstract: Adversarial learning is the study of machine learning techniques deployed in non-benign environments. Example applications include classification for detecting spam email, network intrusion detection, and credit card scoring. In fact, as the gamut of application domains of machine learning grows, the possibility and opportunity for adversarial behavior will only increase. Until now, the standard assumption when modeling adversarial behavior has been to empower an adversary to change all features of the classifie…

Cited by 30 publications (22 citation statements) · References 15 publications

“…As a result, the notion of "robustness" considered in [61] is rather different from that considered in [35] and [36], and in this paper. It is nevertheless of interest to understand whether methods that are more robust to evasion may also benefit from robustness to random perturbations, and vice versa.…”
Section: B. Feature Selection Robustness and Stability (mentioning)
confidence: 79%
“…However, since traditional feature selection methods implicitly assume that training and test samples follow the same underlying data distribution, their performance may be significantly affected under adversarial attacks that violate this assumption. Even worse, performing feature selection in adversarial settings may allow an attacker to evade the classifier at test time with a lower number of modifications to the malicious samples [11], [16], [35], [36]. To our knowledge, besides the above studies, the issue of selecting feature sets suitable for adversarial settings has neither been experimentally nor theoretically investigated more in depth.…”
Section: Discussion (mentioning)
confidence: 99%
“…This can be formalized by an application-dependent constraint. As discussed in [16], two kinds of constraints have been mostly used when modeling real-world adversarial settings, leading one to define sparse (ℓ1) and dense (ℓ2) attacks. The ℓ1-norm typically yields a sparse attack, as it represents the case when the cost depends on the number of modified features.…”
Section: Attacker's Model (mentioning)
confidence: 99%
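
As a concrete illustration of the two budgets mentioned in the statement above, the following sketch contrasts them on a feature vector. It is an assumption-laden example, not code from the cited paper: the function names, the budget value, and the use of NumPy are illustrative. Projecting a perturbation onto an ℓ2 ball keeps small changes on every feature (dense attack), while projecting onto an ℓ1 ball via soft-thresholding leaves most features untouched (sparse attack).

```python
# Minimal sketch (illustrative assumptions, not the authors' code) contrasting
# sparse (l1-budgeted) and dense (l2-budgeted) perturbations of a feature vector.
import numpy as np

def project_l2(delta, budget):
    """Scale the perturbation so its l2 norm does not exceed the budget
    (dense attack: small changes spread over many features)."""
    norm = np.linalg.norm(delta, ord=2)
    return delta if norm <= budget else delta * (budget / norm)

def project_l1(delta, budget):
    """Project onto the l1 ball of radius `budget` via soft-thresholding
    (sparse attack: most features end up unchanged)."""
    abs_d = np.abs(delta)
    if abs_d.sum() <= budget:
        return delta
    # Find the threshold theta such that sum(max(|d| - theta, 0)) == budget.
    sorted_d = np.sort(abs_d)[::-1]
    cumsum = np.cumsum(sorted_d)
    ks = np.arange(1, len(delta) + 1)
    rho = np.nonzero(sorted_d - (cumsum - budget) / ks > 0)[0][-1]
    theta = (cumsum[rho] - budget) / (rho + 1)
    return np.sign(delta) * np.maximum(abs_d - theta, 0.0)

rng = np.random.default_rng(0)
delta = rng.normal(size=100)   # unconstrained perturbation direction
budget = 1.0                   # attack budget, chosen arbitrarily

dense = project_l2(delta, budget)
sparse = project_l1(delta, budget)
print("non-zero features after l2 projection:", np.count_nonzero(np.abs(dense) > 1e-6))
print("non-zero features after l1 projection:", np.count_nonzero(np.abs(sparse) > 1e-6))
```

Running this should show the ℓ2-projected perturbation touching all 100 features while the ℓ1-projected one modifies only a handful, matching the sparse-versus-dense distinction in the statement above.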
“…For example, if instances are images, the attacker may prefer making small changes to many or even all pixels, rather than significantly modifying only a few of them. This amounts to (slightly) blurring the image, instead of obtaining a salt-and-pepper noise effect (like the one produced by sparse attacks) [16].…”
Section: Attacker's Model (mentioning)
confidence: 99%
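
To connect this with the image example, a similarly hedged sketch (hypothetical setup; the image size, budget, and number of perturbed pixels are arbitrary choices) spreads the same ℓ1 budget either over every pixel, giving tiny blur-like changes, or over a few pixels, giving large salt-and-pepper-like changes:

```python
# Illustrative sketch (assumed setup, not from the cited paper): the same total
# l1 "mass" applied as a dense vs. a sparse perturbation to a grayscale image.
import numpy as np

rng = np.random.default_rng(1)
image = rng.uniform(size=(28, 28))   # placeholder grayscale image in [0, 1]
budget = 4.0                         # total l1 budget, chosen arbitrarily

# Dense attack: spread the budget over every pixel -> tiny, blur-like changes.
dense_noise = np.full(image.shape, budget / image.size)

# Sparse attack: concentrate the budget on a handful of pixels ->
# salt-and-pepper-like artifacts.
sparse_noise = np.zeros(image.shape)
idx = rng.choice(image.size, size=8, replace=False)
sparse_noise.flat[idx] = budget / 8

print("max per-pixel change, dense :", dense_noise.max())   # ~0.005
print("max per-pixel change, sparse:", sparse_noise.max())  # 0.5
```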