2022 ACM Conference on Fairness, Accountability, and Transparency
DOI: 10.1145/3531146.3533150
Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning

Cited by 45 publications (26 citation statements)
References 49 publications
“…With increased awareness of the need to anticipate harms early in product development [172], designers and researchers are central actors in pursuing harm reduction [34,49,87]. Anticipating harms requires considering how technological affordances shape their use and impact [76,175].…”
Section: Identifying and Anticipating Harms in Practice
confidence: 99%
“…To borrow a phrase from Jack Balkin, such "code is lawless" [1, p. 52], since the unpredictability that results from non-determinism presents key problems for thinking of code as being constitutive of law.…”
Section: Non-deterministic Code Is Lawless
confidence: 99%
“…In ML systems, we can have full access to both the inputs and subsequent outputs, while having no clear understanding of how the mapping from one to the other occurred. In other words, unlike the law, ML functions defy explanation and reasonable justification, which in turn raises fundamental questions about the legitimacy of using ML as a decision-making tool and muddies the ability to determine accountability when these tools cause harms [15]. In short, ML's problem with "explainability" shows how the analogy essentially and inescapably falls short; both the law and ML may behave like functions, but functions that are fundamentally different in kind.…”
Section: Introduction
confidence: 99%
“…In addition to these regulatory discussions, academic researchers surface the impact of algorithmic systems in the public sector and call for algorithmic accountability (Barocas and Selbst, 2016; Calo and Citron, 2021; Cooper et al., 2022; Crump, 2016; Diakopoulos, 2014; Eubanks, 2018; Kroll et al., 2017; O'Neil, 2016; Pasquale, 2015; Richardson et al., 2019; Schwartz, 1992; Veale et al., 2018; Young et al., 2019), and for impact assessments to be made mandatory (A Civil Society Statement, 2021; Ada Lovelace Institute, 2021; Kaminski and Malgieri, 2019; Reisman et al., 2018). A robust literature identifies the need for transparency and public disclosures.…”
confidence: 99%