2018
DOI: 10.1002/widm.1245
A dynamic‐adversarial mining approach to the security of machine learning

Abstract: Operating in a dynamic real‐world environment requires a forward‐thinking and adversarial‐aware design for classifiers, beyond fitting the model to the training data. In such scenarios, it is necessary to design classifiers so that they are: (a) harder to evade, (b) able to detect changes in the data distribution over time, and (c) able to retrain and recover from model degradation. While most works in the security of machine learning have concentrated on the evasion resistance problem (a), there is little…
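The three requirements listed in the abstract (evasion resistance, drift detection, recovery through retraining) can be pictured with a small streaming loop. The sketch below is illustrative only and is not the authors' dynamic‐adversarial mining method: it uses a plain windowed-accuracy drop as the drift signal, and the function name, window_size, and drift_threshold are hypothetical choices.

```python
# Illustrative sketch only, NOT the authors' dynamic-adversarial mining method:
# a generic detect-and-retrain loop over a labeled stream. The drift signal
# (windowed accuracy drop), window_size, and drift_threshold are hypothetical.
from sklearn.linear_model import LogisticRegression

def stream_with_retraining(X_stream, y_stream, window_size=500, drift_threshold=0.10):
    """Classify a labeled stream window by window; retrain when accuracy degrades."""
    model = LogisticRegression(max_iter=1000)
    model.fit(X_stream[:window_size], y_stream[:window_size])            # initial fit
    baseline_acc = model.score(X_stream[:window_size], y_stream[:window_size])

    for start in range(window_size, len(X_stream), window_size):
        X_win = X_stream[start:start + window_size]
        y_win = y_stream[start:start + window_size]
        acc = model.score(X_win, y_win)             # (b) track degradation over time
        if baseline_acc - acc > drift_threshold:    # simple drift alarm
            model.fit(X_win, y_win)                 # (c) retrain and recover
            baseline_acc = model.score(X_win, y_win)
        yield start, acc
```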


Cited by 8 publications (10 citation statements)
References 59 publications
“…For example, a new type of credit card fraud can appear that tries to circumvent existing fraud detection set in place. These changes in data streams, named concept drift (Pinage, dos Santos, & da Gama, 2016;Sethi, Kantardzic, Lyu, & Chen, 2018), often affect the underlying data distribution and reduce the performance of existing classifiers.…”
Section: Introduction (mentioning)
confidence: 99%
“…[36] suggested that even if an adversary knows the importance of features used by the deployed model, he or she will not be able to evade detection without knowing the ML algorithm used. However, this cannot stop determined adversaries from trying every way possible to accomplish their goals [23]. Furthermore, as stated in Ref.…”
Section: Adversarial Attacks Against Twitter Spam Detectors (mentioning)
confidence: 98%
“…Consequently, considering the robustness of selected features and applying the disinformation method when designing a spam detector could help reduce the effect of adversaries' activities. However, this cannot stop determined adversaries from trying every way possible to accomplish their goals [23]. Furthermore, as stated in Ref.…”
Section: Defenses Against Exploratory Attacks (mentioning)
confidence: 98%
“…However, this cannot stop determined adversaries from trying every way possible to accomplish their goals [46]. Furthermore, as stated in [2], relying on obscurity in an adversarial environment is not a good security practice, as one should always overestimate rather than underestimate the adversary's capabilities.…”
Section: Defenses Against Exploratory Attacks (mentioning)
confidence: 99%
“…The traditional assumption of stationarity of data distribution in ML is that the dataset used for training a classifier (such as SVM or RF) and the testing data (the future data that will be classified) have a similar underlying distribution. This assumption is violated in the adversarial environment, as adversaries are able to manipulate data either during training or before testing [46,2].…”
Section: Introduction (mentioning)
confidence: 99%
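The stationarity assumption described in this last excerpt can be checked directly by comparing the training data against incoming data. The sketch below is a generic illustration, not a method from the cited works: it runs a per-feature two-sample Kolmogorov-Smirnov test, and the function name and the alpha threshold are arbitrary choices.

```python
# Illustrative sketch, not taken from the cited works: a per-feature two-sample
# Kolmogorov-Smirnov check of the stationarity assumption (training and incoming
# data drawn from a similar distribution). The alpha threshold is arbitrary.
from scipy.stats import ks_2samp

def distribution_shift_report(X_train, X_incoming, alpha=0.01):
    """Return (feature index, KS statistic, p-value) for features that appear shifted."""
    shifted = []
    for j in range(X_train.shape[1]):
        stat, p_value = ks_2samp(X_train[:, j], X_incoming[:, j])
        if p_value < alpha:          # reject 'same distribution' for this feature
            shifted.append((j, stat, p_value))
    return shifted
```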