2015
DOI: 10.2139/ssrn.2694326
Machines Learning Justice: The Case for Judgmental Bootstrapping of Legal Decisions

Cited by 2 publications (3 citation statements)
References 31 publications
“…We believe that, properly applied, algorithms can not only make more accurate predictions, but offer increased transparency and fairness over their human counterparts (cf. [23]).…”
Section: Discussion
confidence: 99%
“…The role of extraneous and ethically inappropriate factors in human decision making is well documented (see, for example, Tversky and Kahneman [1974]; Danziger, Levav, and Avnaim-Pesso [2011]; Abrams, Bertrand, and Mullainathan [2012]), and discriminatory decision making is pervasive in many of the sectors where algorithmic profiling might be introduced (see, for example, Holmes and Horvitz [1994]; Bowen and Bok [1998]). We believe that, properly applied, algorithms can not only make more accurate predictions, but offer increased transparency and fairness over their human counterparts (Laqueur and Copus 2015). 9 Above all else, the GDPR is a vital acknowledgement that, when algorithms are deployed in society, few if any decisions are purely "technical."…”
Section: Discussion
confidence: 99%
“…Researchers in Australia and Canada have thus publicized data to encourage judges and courts to audit their decision making to address cognitive and social biases. 31 Other scholars have advocated for using forms of AI to crowd-source problems of evidence in refugee decision making amongst judges 32 or to resolve epistemic doubt in favor of the applicant. 33 The above examples point to ways that digitization may equally reconstitute our understandings of what types of evidence are relevant for RSD as a complex process where human decision making is also itself prone to significant bias and information gaps.…”
Section: Digital Evidence to Address Epistemic Injustice in RSD
confidence: 99%