2010
DOI: 10.1007/s10994-010-5199-2

Mining adversarial patterns via regularized loss minimization

Abstract: Traditional classification methods assume that the training and the test data arise from the same underlying distribution. However, in several adversarial settings, the test set is deliberately constructed to increase the error rate of the classifier. A prominent example is spam email, where words are transformed to get around word-based features embedded in a spam filter. In this paper we model the interaction between a data miner and an adversary as a Stackelberg game with convex loss functions. We s…
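To make the Stackelberg framing concrete, the following is a minimal sketch of one plausible reading, not the paper's algorithm: an adversary rewrites the features of spam instances under an assumed quadratic movement cost, and the data miner responds with regularized logistic loss minimization. The alternating loop, the cost model, and all function names here are illustrative assumptions.

```python
# Minimal sketch (illustrative only): an adversary shifts positive-class ("spam")
# feature vectors to lower their scores under a quadratic movement cost, and the
# data miner best-responds by regularized logistic loss minimization.
import numpy as np

rng = np.random.default_rng(0)

def fit_regularized_logistic(X, y, lam=0.1, lr=0.1, steps=500):
    """Learner's best response: minimize mean logistic loss + (lam/2)*||w||^2."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w)                            # labels y are in {-1, +1}
        sigma = 1.0 / (1.0 + np.exp(np.clip(margins, -30, 30)))
        grad = -(X * (y * sigma)[:, None]).mean(axis=0) + lam * w
        w -= lr * grad
    return w

def adversary_shift(X, y, w, cost=1.0):
    """Adversary's move under an assumed quadratic cost: each spam vector x
    minimizes w.(x + delta) + (cost/2)*||delta||^2, giving delta = -w / cost."""
    X_adv = X.copy()
    X_adv[y == 1] -= w / cost
    return X_adv

# Synthetic two-class data: +1 = spam, -1 = ham.
n, d = 200, 5
X = rng.normal(size=(n, d))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)

X_play = X
for round_ in range(3):                                  # a few rounds of play
    w = fit_regularized_logistic(X_play, y)
    X_play = adversary_shift(X, y, w, cost=2.0)          # adversary rewrites spam features
    acc = np.mean(np.sign(X_play @ w) == y)
    print(f"round {round_}: accuracy on shifted data = {acc:.3f}")
```

The quadratic cost is chosen only because it gives the adversary a closed-form best response (each spam vector steps by -w/cost), which keeps the sketch short; the paper's actual loss functions and solution method are given in the full text.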

Cited by 50 publications (46 citation statements)
References 16 publications (23 reference statements)

Citation statements (ordered by relevance)
“…Later on, such an equilibrium is used to choose an optimal set of attributes that gives good equilibrium performance. Improved models in which Nash strategies are played have also been proposed [4,18].…”
Section: Related Work (mentioning; confidence: 99%)
“…They assume the two players know each other's payoff function. Similar work with improvements on how Nash strategies are played has also been proposed [8], [9]. Brückner and Scheffer [10] present an optimal game by assuming the adversaries always behave rationally.…”
Section: Related Work (mentioning; confidence: 99%)
“…However, in many situations there exists an adversary (such as a spammer) who manipulates the training data distribution (e.g., spam emails) so as to attack the classifiers (spam detectors). This scenario challenges the assumption made by most traditional classifiers and has thus motivated research advances in adversarial learning [1][2][3][4][5][6][7].…”
Section: Introduction (mentioning; confidence: 99%)
“…Liu et al. [4] formulated the interaction between a data miner and an adversary as a zero-sum game, where the adversary is the leader and the data miner is the follower. However, the zero-sum formulation means the model assumes the adversary is purely antagonistic toward the data miner.…”
Section: Introduction (mentioning; confidence: 99%)