2021
DOI: 10.1007/s11292-021-09484-9

Artificial fairness? Trust in algorithmic police decision-making

Abstract: Objectives: Test whether (1) people view a policing decision made by an algorithm as more or less trustworthy than when an officer makes the same decision; (2) people who are presented with a specific instance of algorithmic policing have greater or lesser support for the general use of algorithmic policing; and (3) people use trust as a heuristic through which to make sense of an unfamiliar technology like algorithmic policing. Methods: An online…

Cited by 12 publications (8 citation statements)
References 57 publications (78 reference statements)

“…Contextual inquiry methods would seem particularly appropriate for understanding experiences and reactions to technologically mediated contact encounters, alongside approaches that explore differences within and among different demographics and user types that make up the 'public' end user. Online survey experiments (Hobson et al., 2021), combined with follow-up interviews, could provide another way to explore the challenges and opportunities of public experience of digital systems. Fundamentally, however, it is also crucial to engage with the service designers, strategists and leaders within policing who are currently driving delivery, and within the technology companies and designers who provide the infrastructures underpinning technologically mediated contacts.…”
Section: Methodological Reflections
Mentioning confidence: 99%

“…Conversely, studies have also found that people may see algorithmic decisions as less fair and appropriate than police officer decisions (Hobson et al, 2021). Dietvorst et al (2015) use the phrase ‘algorithmic aversion’ to describe a complex set of reactions to AI, which Burton et al (2018) argue include: false expectations that affect responses to algorithmic decision-making (for example, the idea that error is systematic, ‘baked in’ and irreparable); concerns about decision control and a general sense that the decision-maker cannot be considered trustworthy; and an emphasis on the need for human decision-making in contexts marked by uncertainty.…”
Section: Interacting With Technology
Mentioning confidence: 99%

“…sions made by human experts (Araujo et al 2020). Others, however, suggest a preference for decisions made by human actors in a policing context (Hobson et al 2023). Human decisions to accept or reject AI suggestions are furthermore not only contingent on the confidence vested in the system but also on personal self-confidence (Chong et al 2022).…”
Section: Epistemological Criticism
Mentioning confidence: 99%

“…Studies on the perception of procedural fairness when AI is involved (not necessarily in the context of courts; see, e.g., Woodruff et al 2018; Lee 2018; Lee et al 2019; Saxena et al 2019) yield some mixed results. Some studies find that algorithms are seen as less fair than humans (Newman et al 2020; Hobson et al 2021), e.g., because they lack intuition and subjective judgment capabilities. Other studies, however, find that the difference in perceived procedural fairness of human decision-makers versus algorithmic decision-makers is task-dependent (e.g., Lee 2018).…”
Section: Related Literature
Mentioning confidence: 99%