2019
DOI: 10.2139/ssrn.3487454

Machine Learning Technologies and Their Inherent Human Rights Issues in Criminal Justice Contexts

Cited by 7 publications (5 citation statements)
References 9 publications
“…Depending on the jurisdiction, there may be an orthogonal set of laws that aims to ensure the rights of the users of models created from the data. A number of prior works, particularly from a U.S.-centric perspective, have connected ML to legal frameworks for human rights, especially anti-discrimination [47,48,64,70,72,140,142,146]. These works often focus on the difficulties of constraining algorithmic discrimination in many contexts, proposing alternative legal frameworks that would allow for more regulatory enforcement of algorithmic bias.…”
Section: Legal Context: Rights and Regulations (mentioning)
confidence: 99%
“…However, what is sufficient for one user might be insufficient for another, as their focus is on a different type of interpretability or they may have different background knowledge (Doran et al. 2017; Hirsch et al. 2017). Transparency and interpretability can help to uncover misbehavior of the algorithms, such as inequality, unfairness, and even discrimination and racism (Grace 2019; Hübner 2021).…”
Section: Machine Learning from an End-User Perspective (mentioning)
confidence: 99%
“…For example, governments are currently attempting to steer how facial recognition technologies and algorithmic biases are used before they become ubiquitous (e.g., Grace, 2019). However, governments lack insight into how these technologies will impact their own structures and activities.…”
Section: Box 15. OECD's Work on Emerging Technologies and Regulation (mentioning)
confidence: 99%