2019
DOI: 10.1016/j.ijar.2019.07.003
Adversarial classification: An adversarial risk analysis approach

Abstract: Classification techniques are widely used in security settings in which data can be deliberately manipulated by an adversary trying to evade detection and achieve some benefit. However, traditional classification systems are not robust to such data modifications. Most attempts to enhance classification algorithms in adversarial environments have focused on game theoretical ideas under strong underlying common knowledge assumptions, which are not actually realistic in security domains. We provide an alternative…

Cited by 37 publications (28 citation statements)
References 31 publications
“…Interestingly, in the case of the naïve Bayes classifier, our ARA approach outperforms the classifier under raw untainted data (Table 1). This effect has already been observed by Naveiro et al [2019] for other algorithms and application areas. A possible explanation is that taking into account the presence of an adversary has a regularizing effect, improving the original accuracy of the base algorithm and making it more robust.…”
Section: ARA Defense in Spam Detection Problems (supporting)
confidence: 70%
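The regularizing effect described in this statement can be illustrated with a toy sketch. Everything below is an illustrative assumption rather than the paper's actual model: a three-feature binary naïve Bayes spam filter, with an attacker who flips at most one feature of a spam message. The adversary-aware (ARA-style) classifier marginalizes over the originals the attacker may have modified, instead of scoring the observed instance directly.

```python
import itertools
import numpy as np

# Toy binary naive Bayes spam model; all probabilities are illustrative.
p_spam = 0.4
p_feat_spam = np.array([0.9, 0.8, 0.7])  # p(feature_j = 1 | spam)
p_feat_ham = np.array([0.2, 0.3, 0.1])   # p(feature_j = 1 | ham)

def likelihood(x, p_feat):
    """Naive Bayes likelihood of a binary feature vector x."""
    return float(np.prod(np.where(x == 1, p_feat, 1 - p_feat)))

def standard_posterior_spam(x_obs):
    """Plain naive Bayes posterior: scores the observed data, ignoring tampering."""
    ls = likelihood(x_obs, p_feat_spam) * p_spam
    lh = likelihood(x_obs, p_feat_ham) * (1 - p_spam)
    return ls / (ls + lh)

def attack_prob(x_orig, x_obs):
    """Assumed attacker model: a spammer leaves the message unchanged with
    probability 0.4, or flips exactly one of the 3 features with probability
    0.2 each (0.4 + 3 * 0.2 = 1)."""
    diff = int(np.sum(x_orig != x_obs))
    if diff == 0:
        return 0.4
    if diff == 1:
        return 0.2
    return 0.0

def robust_posterior_spam(x_obs):
    """Adversary-aware posterior: marginalize over the spam originals the
    attacker could have transformed into x_obs; ham data is untainted."""
    ls = 0.0
    for x in itertools.product([0, 1], repeat=3):
        x = np.array(x)
        ls += attack_prob(x, x_obs) * likelihood(x, p_feat_spam)
    ls *= p_spam
    lh = likelihood(x_obs, p_feat_ham) * (1 - p_spam)
    return ls / (ls + lh)
```

On a ham-looking observation such as `[0, 0, 1]`, the robust posterior assigns noticeably more probability to spam than the plain classifier, because it accounts for the chance that a spam message was disguised by a single feature flip — the kind of built-in conservatism the citing authors interpret as regularization.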
“…After reviewing key developments in game-theoretic approaches to AC in Sections 2 and 3, in Sections 4 and 5 we cover novel techniques based on Adversarial Risk Analysis (ARA) [Rios Insua et al 2009], which do not assume standard common knowledge hypotheses. In this, we unify, expand and improve upon earlier work in Naveiro et al [2019] and Gallego et al [2020]. Our focus will be on binary classification problems facing only exploratory attacks, defined to have influence over operational data but not over training data.…”
Section: Introduction (mentioning)
confidence: 97%
“…Interestingly, in the case of the naïve Bayes classifier, our ARA approach outperforms the classifier under raw untainted data (Table 1). This effect has also been observed in [12,33] for other algorithms and application areas. This is likely because taking the presence of an adversary into account has a regularizing effect, improving the original accuracy of the base algorithm and making it more robust.…”
Section: ARA Defense in Spam Detection Problems (supporting)
confidence: 69%
“…Their key advantage is that they do not assume strong common knowledge hypotheses concerning belief and preference sharing, as standard game theoretic approaches to AML do. In this, we unify, expand and improve upon earlier work in [12,13]. Our focus will be on binary classification problems facing only exploratory attacks, defined to have influence over operational data but not over training data.…”
Section: Introduction (mentioning)
confidence: 99%
“…Another possible line of work would be to extend the framework to deal with Bayesian Stackelberg games, which are widely used to model situations in AML in which there is no common knowledge of the adversary's parameters. In this line, the ultimate goal would be to apply the proposed algorithms to solve Adversarial Risk Analysis (ARA, [25]) problems in AML [22].…”
Section: Discussion (mentioning)
confidence: 99%
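The Bayesian Stackelberg setting mentioned in the last statement can be sketched in miniature. Everything here is a made-up illustration, not any cited paper's model: a defender commits to a pure strategy, each attacker "type" (the defender's uncertainty over the adversary's parameters) best-responds to that commitment, and the defender maximizes its payoff averaged over a prior on types.

```python
import numpy as np

# Illustrative payoffs (rows: defender actions, cols: attacker actions).
defender_payoff = np.array([[3.0, 0.0],
                            [1.0, 2.0]])

# Two attacker types capture the defender's lack of common knowledge
# about the adversary's payoffs; the defender holds a prior over them.
attacker_payoffs = {
    "aggressive": np.array([[0.0, 2.0],
                            [3.0, 1.0]]),
    "cautious":   np.array([[1.0, 0.0],
                            [0.0, 2.0]]),
}
type_prior = {"aggressive": 0.6, "cautious": 0.4}

def best_defender_pure_strategy():
    """Enumerate defender pure commitments; each attacker type plays its
    best response, and the defender keeps the commitment with the highest
    prior-weighted expected payoff."""
    best_row, best_value = None, -np.inf
    for row in range(defender_payoff.shape[0]):
        value = 0.0
        for t, payoff in attacker_payoffs.items():
            col = int(np.argmax(payoff[row]))          # type-t best response
            value += type_prior[t] * defender_payoff[row, col]
        if value > best_value:
            best_row, best_value = row, value
    return best_row, best_value
```

In this toy instance the defender's second action is optimal: committing to row 1 yields expected payoff 0.6 · 1 + 0.4 · 2 = 1.4, versus 1.2 for row 0. Restricting the leader to pure commitments keeps the sketch short; solving for an optimal mixed commitment requires the usual linear-programming formulation per follower type.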