Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
Preprint, 2020
DOI: 10.48550/arxiv.2004.07213

Year published (citing works): 2021–2024

Cited by 76 publications (102 citation statements)
References: 0 publications
“…Cobbling together "a robust 'toolbox' of mechanisms to support the verification of claims about AI systems and development processes" (Brundage et al. 2020), in this latter sense, leads, in AI ethics and governance, to a kind of functional tardiness of the governance strategies that result. Namely, it leads to an emphasis on narrowly-targeted methods such as "effective assessment" (Brundage et al. 2020), "auditability" (Mökander and Floridi 2021; Raji et al. 2020), "traceability" (Kroll 2021), and "reviewability" (Cobbe, Lee, and Singh 2021), that show up on the scene a moment too late. Such methods remain ex post facto and external to the inner workings of sufficiently reflective and responsible modes of technology production and use.…”
Section: Setting the Stage (mentioning, confidence: 99%)
“…By this we mean research that seeks to support or develop mechanisms by which the processes and outcomes that characterise ML/AI research and innovation can be made, for example, more transparent, trustworthy, or responsible. Within this broader remit, there is research that includes general overviews or frameworks that support transparent reporting and communication (Mitchell et al. 2019; Brundage et al. 2020), specific (narrowly-focused) tools that support bias mitigation or algorithmic interpretability (Research 2018; PAIR 2020; Lundberg 2020; ICO 2020), as well as more focused extensions of assurance cases to address the specific challenges of ML (Ashmore, Calinescu, and Paterson 2019; Ward and Habli 2020; Habli et al. 2020). Each of these approaches can play a valuable role individually, but collectively they add up to a (currently) disorganised toolbox of practical mechanisms with little unifying purpose or direction.…”
Section: Existing Research in the Assurance of ML Systems (mentioning, confidence: 99%)