2021
DOI: 10.2139/ssrn.3899991
How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission’s Proposal for an Artificial Intelligence Act

Cited by 67 publications (36 citation statements)
References 0 publications
“…It remains that such a self-regulation approach can be put at risk in terms of effectiveness. Providing firms with a wide margin of discretion may run counter to the objectives of regulation, as [39,40] show about the proposal for a European regulation on A.I. presented by the Commission in April 2021.…”
Section: Discussion
confidence: 99%
“…The risk of self-regulation is linked to the combination of vague criteria and the delegation of a wide margin of discretion. Smuha et al [39,40] propose an ex-ante certification of algorithms, which we could extend to a certification procedure for self-evaluation tools.…”
Section: Discussion
confidence: 99%
“…Standards are expected to appear in 2025, presumably around the same time as the AI Act would come into force. However, self-assessment has been criticised for its unreliability, opacity and discretionary nature, and thus the strengthening of ex ante obligations has been strongly advocated (Smuha et al 2021). Delegation of rulemaking to ESOs is equally problematic, since these bodies are governed by private law.…”
Section: Conformity Assessment As Per the 'New Legislative Framework'
confidence: 99%
“…However, despite efforts at expanding transparency in this context, the new draft regulation falls significantly short in terms of mitigating informational imbalances. The proposed regulation adopts a risk-based approach, regulating "high-risk" AI systems, and has been criticised for important shortcomings, not least that the regulation of high-risk systems centres on conformity assessment against a range of essential requirements based largely on a system of internal checks, that is, self-assessment by providers (Veale & Zuiderveen Borgesius 2021; Smuha et al 2021). As a result: "there are 'almost no situations' in which such industry AI self-assessments will require approval by an independent technical organization, and even then, such organizations are usually private sector certification firms accredited by Member States".…”
Section: Regulatory Efforts: The EU Draft AI Regulation
confidence: 99%