Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science
DOI: 10.1145/2840728.2840730

Strategic Classification

Abstract: Machine learning relies on the assumption that unseen test instances of a classification problem follow the same distribution as observed training data. However, this principle can break down when machine learning is used to make important decisions about the welfare (employment, education, health) of strategic individuals. Knowing information about the classifier, such individuals may manipulate their attributes in order to obtain a better classification outcome. As a result of this behavior, often referred to…
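The manipulation described in the abstract can be sketched as a best response to a published linear classifier. This is a minimal illustration, not the paper's exact construction: the movement cost is assumed Euclidean and the payoff of a positive label is normalized to 1, so an agent moves just past the decision boundary only when the cheapest such move is worth it.

```python
import numpy as np

def best_response(x, w, b, payoff=1.0):
    """Strategic agent's response to the linear classifier sign(w.x - b).

    Assumes a Euclidean movement cost and a fixed payoff for being
    classified positive (both are simplifying assumptions for this sketch).
    """
    score = np.dot(w, x) - b
    if score >= 0:
        return x.copy()  # already classified positive: no reason to move
    # Cheapest point past the boundary: project x onto {z : w.z = b},
    # then nudge slightly to the positive side.
    eps = 1e-9
    delta = (b - np.dot(w, x)) / np.dot(w, w)
    x_new = x + (delta + eps) * w
    cost = np.linalg.norm(x_new - x)
    # Manipulate only if the move costs less than the payoff it buys.
    return x_new if cost <= payoff else x.copy()
```

For example, with w = [1, 0] and b = 0.5, an agent at the origin moves a distance of 0.5 to cross the boundary (cost below the payoff of 1), while an agent at [-2, 0] would need to move 2.5 units and therefore stays put.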

Cited by 209 publications (228 citation statements)
References 10 publications
“…This is a crucial distinction in high-stakes decision making, as different error types present asymmetric incentives for individuals, as explained in Section 2; for example, a high false positive rate in hiring would encourage underqualified job applicants. Hu et al [2019] and Milli et al [2019] study the disparate impact of being robust towards strategic manipulation [see e.g., Hardt et al, 2016a], where individuals respond to machine learning systems by manipulating their features to get a better classification. In contrast to our model (Figure 5), their setting models the individual as intervening directly on their features, X, and this is assumed to have no effect on their qualification Y.…”
Section: Related Work
confidence: 99%
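The modeling assumption in the statement above, that agents intervene on the observed features X while the true qualification Y is unchanged, can be illustrated with a small simulation. All names, distributions, and the uniform manipulation budget below are invented for illustration; the point is only that post-manipulation, previously rejected agents cross the threshold without becoming more qualified, degrading the classifier's accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000
y = rng.integers(0, 2, n)            # true qualification: fixed, unaffected by gaming
x = y + rng.normal(0.0, 0.5, n)      # observed feature, correlated with y
threshold = 0.5                      # published decision rule: accept if x >= threshold

budget = 0.6                         # assumed uniform manipulation budget
# Agents below the threshold who can afford to cross it manipulate x; y is untouched.
x_manip = np.where((x < threshold) & (x + budget >= threshold), x + budget, x)

acc_before = np.mean((x >= threshold) == (y == 1))
acc_after = np.mean((x_manip >= threshold) == (y == 1))
```

Because unqualified agents (y = 0) sit closer to the threshold in larger numbers than the false negatives who also benefit, accuracy drops after manipulation even though no one's qualification changed.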
“…Adversary (initial information about learner) | Attack model | Validation domain:
- Full information about learner's utility, cost and classifier parameters (zero-sum games) [11] | causative attacks | spam filtering
- No information about learner's utility, costs and classifier parameters [35, 39, 31, 57] | exploratory attacks by changing values of future input | spam filtering
- No information about learner's utility, costs and classifier parameters [17, 12, 52] | exploratory attacks by removing features from future input | spam filtering
- With and without info. about prob.… [23]”
Section: Initial Information About Learner
confidence: 99%
“…In these works, there often is no cost for reporting one's data and the data analyst doesn't use monetary payments. These works attempt to design or identify mechanisms (inference or learning processes) that are robust to potential data manipulations [Dekel et al, 2010, Meir et al, 2011, Perote and Perote-Peña, 2003, Hardt et al, 2016, Dong et al, 2017, Chen et al, 2018b].…”
Section: Other Related Work
confidence: 99%