2019
DOI: 10.2139/ssrn.3452030

Human Judgement in Algorithmic Loops: Individual Justice and Automated Decision-Making

Abstract: There are various arguments in favour of tempering algorithmic decision-making with human judgement. One common family of arguments appeals to concepts and criteria derived from legal philosophy about the nature of law and legal reasoning, and argues that algorithmic systems cannot satisfy them (but humans can). This paper argues that among the latter family of arguments, there is often an implicit appeal to the notion that each case needs to be assessed on its own merits, without comparison to or generalisation…

Cited by 9 publications (13 citation statements)
References 48 publications
“…Assessing whether algorithms actually deliver on these points goes beyond the scope of this article. Instead, the focus here is on the concerns voiced in the literature regarding the compatibility of algorithmic decision-making with principles of just and fair administration (Binns, 2019; Simmons, 2019). For example, studies in policing and criminal justice have argued that algorithmic decisions tend to reproduce bias towards already over-policed areas and target groups (Hannah-Moffat, 2016; Harcourt, 2007; Van Eijk, 2017).…”
Section: Algorithms as a Rationalizing Force
confidence: 99%
“…A second major concern is the reduction of the discretionary space for human decision-makers to override algorithmic decisions (Citron & Pasquale, 2014; Zouridis et al., 2020). ‘Keeping humans in the loop’ (Zarsky, 2011) is seen as crucial to assure individual administrative justice, to override potential errors and to adapt decisions to specific circumstances, as is also common practice in classic ‘analogue’ decision-making through regulated street-level discretion (Binns, 2019; Van Eck, 2018).…”
Section: Introduction
confidence: 99%
“…Much of this work has focused on AI's potential impact on the economy, the labor market, and national defense (Frank et al., 2019; Frey & Osborne, 2017; Korinek & Stiglitz, 2017; McClure, 2018). Another vein of related work focuses on the use of machine learning-based decision support systems in criminal justice for bail setting and sentencing (Binns, 2019; Hannah-Moffat, 2013; Hannah-Moffat et al., 2009). Less attention, however, has been paid to AI's implications for public organizations and the governance of the public sector more broadly.…”
Section: Introduction
confidence: 99%
“…The frontline workers' exercise of discretion, or how they make decisions within their job constraints, has been captivating scholarly interest for decades (e.g., Lipsky 2010). The introduction of algorithms at work only added to the complexity of the workers' tasks and the importance of this topic; these workers are simultaneously managing numerous policies and programs, trying to live up to social expectations, and coping with the constraints imposed by increasingly complex digital tools (Binns 2019; Hupe 2019; Raso 2017).…”
Section: Introduction
confidence: 99%