2021
DOI: 10.9785/cri-2021-220402
Demystifying the Draft EU Artificial Intelligence Act — Analysing the good, the bad, and the unclear elements of the proposed approach

Cited by 255 publications (107 citation statements). References: 0 publications.
“…Explainable decision-making algorithms, for instance, may be required to declare their lack of agency, intentionality, and rationality alongside their explanations so that people are not influenced to hold them accountable. Such an approach would be similar to the existing proposals that mandate designers to disclose bots [100]. In conclusion, regulation can ensure that explainability and accountability coexist in algorithmic decision-making.…”
Section: The Necessity For Hard Regulation
confidence: 92%
“…The Food and Drug Administration (FDA) guidelines for AI systems integrated into software as a medical device (SaMD) have a strong emphasis on functional performance, clearly not taking product performance as a given [64]. The draft AI Act in the EU includes requirements for pre-marketing controls to establish products' safety and performance, as well as quality management for high-risk systems [190]. These mentions suggest that functionality is not always ignored outright.…”
Section: The Functionality Assumption
confidence: 99%
“…This includes what Stark and Hutson call "physiognomic artificial intelligence," which attempts to infer or create hierarchies about personal characteristics from data about their physical appearance [179]. Criticizing the EU Act's failure to address this inconvenient truth, Veale and Borgesius [190] pointed out that "those claiming to detect emotion use oversimplified, questionable taxonomies; incorrectly assume universality across cultures and contexts; and risk '[taking] us back to the phrenological past' of analysing character traits from facial structures."…”
Section: Failure Taxonomy
confidence: 99%
“…The European Union (EU) Artificial Intelligence (AI) Act proposes banning AI systems that "manipulate persons through subliminal techniques or exploit the fragility of vulnerable individuals, and could potentially harm the manipulated individual or third person" (EU Commission 2021). The EU AI Act proposes two practices for regulating manipulation (Veale and Borgesius 2021); namely, prohibiting:…”
Section: Introduction
confidence: 99%