2017
DOI: 10.2139/ssrn.3145308
Hope, Hype, and Fear: The Promise and Potential Pitfalls of the Big Data Era in Criminal Justice

Cited by 15 publications (16 citation statements)
References 5 publications
“…Algorithmic decision systems (Isaac, 2017) are increasingly common within the US criminal justice system despite significant evidence of shortcomings, such as the linking of criminal datasets to patterns of discriminatory policing (Angwin et al, 2016; Lum and Isaac, 2016; Richardson et al, 2019). Beyond the domain of criminal justice, there are numerous instances of predictive algorithms perpetuating social harms in everyday interactions, including examples of facial recognition systems failing to detect Black faces and perpetuating gender stereotypes (Buolamwini and Gebru, 2018; Keyes, 2018; Stark, 2019), hate speech detection algorithms identifying Black and queer vernacular as toxic (Sap et al, 2019), new recruitment tools discriminating against women (Dastin, 2018), automated airport screening systems systematically flagging trans bodies for security checks (Costanza-Chock, 2018), and predictive algorithms used to purport that queerness can be identified from facial images alone (Agüera y Arcas et al, 2018).…”
Section: Site 1: Algorithmic Decision Systems (mentioning)
confidence: 99%
“…In response, Equivant published a technical report [41] refuting the claims of bias made by ProPublica and concluded that COMPAS is sufficiently calibrated, in the sense that it satisfies Predictive Parity at key thresholds. As previous research has shown [81, 113, 160], modern policing tactics center on targeting a small number of neighborhoods, often disproportionately populated by non-white and low-income residents, with recurring patrols and stops. This uneven distribution of police attention, as well as other factors such as funding for pretrial services [101, 168], can be rephrased in the language of CBNs as indicating the presence of a direct path 𝐴 → 𝑌 (through unobserved neighborhood) in the CBN representing the data-generation mechanism, as well as an influence of 𝐴 on 𝑌 through the set of variables, containing the number of prior arrests, that are used to form a prediction Ŷ of 𝑌.…”
Section: Causal Bayesian Network: A Visual Tool For (Un)fairness (mentioning)
confidence: 99%
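For context on the calibration claim in the statement above, a minimal sketch of the Predictive Parity criterion as it is usually stated in the algorithmic fairness literature is given below; the specific thresholds and score bins Equivant evaluated are not given in this excerpt and are treated here as unspecified.

% Predictive Parity (calibration across groups) for a binary risk flag \hat{Y},
% outcome Y, and sensitive attribute A with groups a and b. The threshold at
% which \hat{Y} is formed is an assumption; the excerpt does not specify it.
\[
  P\bigl(Y = 1 \mid \hat{Y} = 1,\ A = a\bigr) \;=\; P\bigl(Y = 1 \mid \hat{Y} = 1,\ A = b\bigr)
\]
% Read: among individuals flagged as high risk, the observed rate of the
% outcome Y is equal across groups. The quoted CBN argument is that a direct
% path A -> Y (e.g. via neighborhood-level policing intensity) can coexist
% with this criterion, so satisfying it does not by itself rule out a
% discriminatory data-generation mechanism.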
“…These data often reflect and directly encode some of the social, gender, and racial biases of societies. For instance, the U.S. criminal justice system increasingly relies on AI in its decision processes (Isaac 2017). These algorithms, however, are trained on criminal data sets that show patterns of discriminatory policing (Angwin et al 2017, Richardson and Schultz 2019).…”
Section: Responsible Service Analytics (mentioning)
confidence: 99%