Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society 2022
DOI: 10.1145/3514094.3534181
Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance

Abstract: Much attention has focused on algorithmic audits and impact assessments to hold developers and users of algorithmic systems accountable. But existing algorithmic accountability policy approaches have neglected the lessons from non-algorithmic domains: notably, the importance of interventions that allow for the effective participation of third parties. Our paper synthesizes lessons from other fields on how to craft effective systems of external oversight for algorithmic deployments. First, we discuss the challe…

Cited by 41 publications (31 citation statements)
References 66 publications
“…The size of the system and its components also determine auditability; the datasets that large generative AI systems are trained on are not only difficult to analyze at scale, but few tools exist to analyze large static datasets [19]. Formal audits alone cannot be the only insight or governance of a system [76].…”
Section: Auditability
confidence: 99%
“…Critically, actors in this space must have some incentive to engage in frequent community discussion and be held accountable to commitments for safe releases. Google's public position on responsible AI practices encourages in-house risk evaluation and mitigation, but conflicts of interest can result in internal critics being unable to share or publish findings [76] and dismissal [30]. These initiatives can also be formed as an industry argument for self-regulation, but ultimately lack external accountability.…”
Section: Multidisciplinary Discourse
confidence: 99%
“…Disclosures seek to acknowledge the agency and autonomy of individuals. Calls for the requirement of disclosures in the context of AI systems appear in policy recommendations on algorithmic auditing (Costanza-Chock et al, 2022; Raji et al, 2022) and regulatory frameworks by the European Union (see articles 13 and 22 of the GDPR and articles 51, 52, and 60 of the AI Act), and US Congress (Klobuchar, 2018; Trahan, 2021). Meaningful disclosure affirms that the individual has final decision-making power in how they want to proceed.…”
Section: Dimension 5: Disclosure-centered Mediation
confidence: 99%
“…External model audit, i.e. model evaluation by an independent, external auditor for the purpose of providing a judgement - or input to a judgement - about the safety of deploying a model (or training a new one) (ARC Evals, 2023; Mökander et al., 2023; Raji et al., 2022b).…”
Section: Model Evaluation As Critical Governance Infrastructure
confidence: 99%
“…(a) Surface emerging model behaviours and risks via monitoring efforts. This could include direct monitoring of inputs and outputs to the model, and systems for incident reporting (see Brundage et al., 2022; Raji et al., 2022b).…”
Section: Responsible Deployment
confidence: 99%