2021
DOI: 10.1007/s43681-021-00117-5

AI audits for assessing design logics and building ethical systems: the case of predictive policing algorithms

Abstract: Organisations, governments, institutions and others across several jurisdictions are using AI systems for a constellation of high-stakes decisions that pose implications for human rights and civil liberties. But a fast-growing multidisciplinary scholarship on AI bias is currently documenting problems such as the discriminatory labelling and surveillance of historically marginalised subgroups. One of the ways in which AI systems generate such downstream outcomes is through their inputs. This paper focuses on a …

Cited by 15 publications (9 citation statements); references 49 publications.
“…Indeed, several studies have found this to be the case and have shown that close proximity and interactions between residents and the police can artificially inflate crime rates as the police are likely to encounter and observe more crimes than in locations that are not as heavily policed [19,35].…”
Section: Neo-Classical Criminological Theories
confidence: 99%
“…A social implication is that locations populated by groups that are historically vulnerable to racially-biased over-policing can consequently become exposed to even more over-policing and disproportionately high rates of criminalisation. Racially biased policing can compound the problem since it generates crime data that prompts algorithmic predictive models to designate their geographical regions as 'high crime' and in need of enhanced policing activity [14,16,35]. Developers of such predictive algorithms and their proponents refute this [6,24], and argue that police dispatch to a predicted high-crime location does not artificially inflate crime rates and trigger a positive feedback loop.…”
Section: Neo-Classical Criminological Theories
confidence: 99%
“…It is, however, worth noting that the aim of debiasing is to attain data neutrality and, as such, it is tech-reformist, given the utopic assumption that technical components of an algorithm, in this case the underpinning data, can be neutral and unaffected by data collection and processing choices and general interpretations (see also Ugwudike 2021). It ignores structural conditions such as the digital capital through which the choices, preferences, views and values of those involved in creating algorithms and/or debiasing data permeate datasets and introduce often hidden biases, even when the data appear ostensibly neutral.…”
Section: Tech-Reformism and Its Limitations
confidence: 99%
“…Therefore, in order to prevent important content from being damaged and causing unnecessary loss or repeated use, it is necessary to save these key parts so that they remain available for viewing, calling and retrieval by other relevant functional modules in future queries. At the same time, this ensures that users can safely access and process the required information [13,14].…”
Figure 1. Distributed database.
Section: Distributed Database
confidence: 99%