2022
DOI: 10.2139/ssrn.4080058

Data Justice in Practice: A Guide for Developers

Cited by 3 publications (3 citation statements)
References 8 publications

“…This means that the very acts of devising the statistical problem and of translating project goals into measurable proxies can introduce structural biases which may ultimately lead to discriminatory harm. [53] Likewise, at the Preprocessing & Feature Engineering stage, human decisions about how to group or separate input features (e.g. how to carve up categories of gender or ethnic groups) or about which input features to exclude altogether (e.g.…”
Section: Key Concept Model Development (mentioning; confidence: 99%)
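To make the point in the excerpt above concrete, here is a minimal Python sketch (the data, category names, and grouping choices are all invented for illustration, not taken from the guide) showing how the decision to collapse fine-grained ethnicity categories into coarser buckets at the preprocessing stage can hide a disparity in approval rates that is visible under the finer grouping.

```python
import pandas as pd

# Hypothetical application outcomes with a fine-grained ethnicity field.
df = pd.DataFrame({
    "ethnicity": ["A1", "A1", "A2", "A2", "B1", "B1", "B2", "B2"],
    "approved":  [1,    1,    0,    0,    1,    0,    1,    1],
})

# Fine-grained grouping: each subgroup's approval rate is visible on its own.
fine = df.groupby("ethnicity")["approved"].mean()

# Coarse grouping: A1/A2 and B1/B2 are collapsed into two buckets,
# which masks the disparity affecting subgroup A2.
df["coarse"] = df["ethnicity"].str[0]
coarse = df.groupby("coarse")["approved"].mean()

print("Fine-grained approval rates:\n", fine)
print("Coarse approval rates:\n", coarse)
```

Under the fine grouping, A2's approval rate of 0.0 stands out; under the coarse grouping it is averaged away to 0.5, which is one way a feature-engineering choice can bury a discriminatory pattern.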
“…It can also help detect and mitigate discriminatory biases that may be buried within model architectures (Alikhademi et al, 2021;Zhao, Chen, et al, 2024;Zhou et al, 2020). Furnishing understandable and accessible explanations of the rationale behind system outputs can likewise help to establish the lawfulness of AI systems (e.g., their compliance with data protection law and equality law) (Chuang et al, 2024; ICO/Turing, 2020) as well as to ensure responsible and trustworthy implementation by system deployers, who are better equipped to grasp system capabilities, limitations, and flaws and to integrate system outputs into their own reasoning, judgment, and experience (ICO/Turing, 2020; Leslie, Rincón, et al, 2024). The provision of nontechnical, plain-language AI explanations also helps both to establish justified trust among impacted people and to ensure paths to actionable recourse for them when things go wrong (Ferrario & Loi, 2022;Liao et al, 2022;Luo & Specia, 2024).…”
Section: Risks From Model Scaling: Model Opacity and Complexity (mentioning; confidence: 99%)
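As a rough illustration of the kind of plain-language, per-decision explanation the excerpt above describes, the sketch below (synthetic data, invented feature names, and a scikit-learn logistic regression standing in for the deployed model) turns per-feature contributions to a single prediction into a short, non-technical rationale.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed"]  # hypothetical features
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(x):
    # Per-feature contribution to the log-odds for this single case.
    contributions = model.coef_[0] * x
    order = np.argsort(-np.abs(contributions))
    decision = "approve" if model.predict(x.reshape(1, -1))[0] == 1 else "decline"
    lines = [f"The model suggests: {decision}."]
    for i in order:
        direction = "increased" if contributions[i] > 0 else "decreased"
        lines.append(f"- {feature_names[i]} {direction} the likelihood of approval.")
    return "\n".join(lines)

print(explain(X[0]))
```

This is only a sketch of the idea; real systems would use richer attribution methods and tailor the wording to the affected person, but the principle of translating model internals into accessible reasons is the same.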
“…Verifying use of safety measures post-deployment. In safety-critical settings, regulators may want to ensure that safety measures, for example, output filters, are applied to AI models or their outputs (see, for example, Dong et al, 2024a;Leslie et al, 2024;Welbl et al, 2021). Enforcing this may require methods for auditing systems deployed in such domains to check that they do in fact have safeguards that meet these specifications.…”
Section: Verifiable Audits (mentioning; confidence: 99%)
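As a loose sketch of what verifying a post-deployment safety measure could look like in code (all names, the blocklist, and the audit rule are hypothetical and not taken from the cited works), the example below wraps model outputs in a filter, records that the filter was applied, and gives an auditor a simple check over the logged responses.

```python
from dataclasses import dataclass

BLOCKLIST = {"unsafe_term"}  # stand-in for a real content-safety check

@dataclass
class AuditedResponse:
    text: str
    filter_applied: bool
    filter_version: str

def output_filter(raw_text: str) -> str:
    # Redact any blocklisted terms from the raw model output.
    return " ".join("[redacted]" if w in BLOCKLIST else w for w in raw_text.split())

def respond(raw_text: str) -> AuditedResponse:
    # Every deployed response passes through the filter and carries an audit record.
    return AuditedResponse(
        text=output_filter(raw_text),
        filter_applied=True,
        filter_version="v1.0",
    )

def audit(responses: list[AuditedResponse]) -> bool:
    # Post-deployment check: every logged response must have passed the expected filter.
    return all(r.filter_applied and r.filter_version == "v1.0" for r in responses)

log = [respond("this contains an unsafe_term"), respond("this is fine")]
print(log[0].text)   # "this contains an [redacted]"
print(audit(log))    # True
```

The point of the audit record is that a regulator or third-party auditor can verify, after deployment, that the specified safeguard was actually applied to every output rather than relying on the deployer's assurance alone.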