2022
DOI: 10.1007/s00146-022-01441-y
Algorithmic fairness through group parities? The case of COMPAS-SAPMOC

Abstract: Machine learning classifiers are increasingly used to inform, or even make, decisions significantly affecting human lives. Fairness concerns have spawned a number of contributions aimed at both identifying and addressing unfairness in algorithmic decision-making. This paper critically discusses the adoption of group-parity criteria (e.g., demographic parity, equality of opportunity, treatment equality) as fairness standards. To this end, we evaluate the use of machine learning methods relative to different ste…
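
The abstract names three group-parity criteria without defining them. As a reading aid, here is a minimal Python sketch (mine, not the paper's) of how each criterion is computed per group from binary predictions; fairness under a given criterion then means the corresponding value is (approximately) equal across the protected groups being compared:

```python
import numpy as np

def parity_metrics(y: np.ndarray, y_hat: np.ndarray) -> dict:
    """Per-group quantities behind the three parity criteria in the abstract."""
    tp = int(np.sum((y_hat == 1) & (y == 1)))
    fp = int(np.sum((y_hat == 1) & (y == 0)))
    fn = int(np.sum((y_hat == 0) & (y == 1)))
    return {
        # demographic parity compares the positive-prediction rate across groups
        "positive_rate": (tp + fp) / len(y),
        # equality of opportunity compares true-positive rates across groups
        "tpr": tp / (tp + fn) if (tp + fn) else float("nan"),
        # treatment equality compares the ratio of false negatives to false positives
        "fn_fp_ratio": fn / fp if fp else float("inf"),
    }
```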

Cited by 19 publications (6 citation statements)
References 22 publications

“…Bias in data is a pervasive issue, as datasets often mirror the biases and inequalities present in society. Historical and societal biases can be ingrained in various domains, including criminal justice (see, for example, the case of COMPAS [44]), employment [45], healthcare [46], and finance [47]. Biased data can result from historical discrimination, cultural stereotypes, or systemic inequalities, leading to under-representation or misrepresentation of certain groups.…”
Section: Bias and Data Fairness Issues
mentioning, confidence: 99%
“…It turns out that it is mathematically impossible to simultaneously satisfy both forms of fairness, calibration and classification parity, when the base rates of re-offending differ between groups (Berk et al., 2021; Lagioia et al., 2023). That is, if a greater share of Black people is classified as high risk (which the algorithm does in an unbiased manner), then it necessarily follows that a greater share of Black defendants who do not re-offend will also be mistakenly classified as high risk.…”
Section: The Post-modern Critiques of Misinformation Research
mentioning, confidence: 99%
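
The quoted impossibility can be checked with a few lines of arithmetic. A minimal numeric sketch (the rates below are illustrative assumptions, not figures from the cited works) uses the identity FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR): if calibration (PPV) and the false-negative rate are held equal across groups, different base rates p force different false-positive rates.

```python
def false_positive_rate(base_rate: float, ppv: float, fnr: float) -> float:
    """FPR implied by base rate p, calibration PPV and false-negative rate FNR:
    FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)."""
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * (1 - fnr)

ppv, fnr = 0.7, 0.3  # identical calibration and miss rate for both groups
for group, base_rate in [("group A", 0.5), ("group B", 0.3)]:
    print(group, round(false_positive_rate(base_rate, ppv, fnr), 3))
# group A 0.3 vs. group B 0.129: equal FPRs would require equal base rates
```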
“…Researchers have also used Google TensorFlow to forecast crime hotspots, evaluating an RNN architecture (Zhuang, 2017) on three metrics: precision, accuracy, and recall. A comparative study of violent crime patterns (McClendon & Meghanathan, 2015) was carried out using the open-source data mining software WEKA [27]. There, three algorithms, namely linear regression, additive regression, and decision stump, were implemented to determine the efficiency and efficacy of the ML algorithms.…”
Section: Convergence of AI and Neuroprediction in Forensics
mentioning, confidence: 99%
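
For readers who want to reproduce the flavour of that WEKA comparison, a hedged scikit-learn sketch follows (the synthetic data and the GradientBoostingRegressor stand-in for WEKA's additive regression are my assumptions, not the cited study's setup):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))  # placeholder features (e.g., per-area statistics)
y = X @ np.array([1.5, -2.0, 0.5, 0.0, 0.0]) + rng.normal(scale=0.5, size=500)

models = {
    "linear regression": LinearRegression(),
    "additive regression (boosting stand-in)": GradientBoostingRegressor(),
    "decision stump": DecisionTreeRegressor(max_depth=1),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {r2:.3f}")
```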