2023
DOI: 10.3389/frai.2023.1020592
The assessment list for trustworthy artificial intelligence: A review and recommendations

Abstract: In July 2020, the European Commission's High-Level Expert Group on AI (HLEG-AI) published the Assessment List for Trustworthy Artificial Intelligence (ALTAI) tool, enabling organizations to perform self-assessments of the fit of their AI systems and surrounding governance to the “7 Principles for Trustworthy AI.” Prior research on ALTAI has focused primarily on specific application areas, but there has yet to be a comprehensive analysis and broader recommendations aimed at proto-regulators and industry practit…

Cited by 18 publications (12 citation statements) | References 21 publications
“…For example, in Germany, a patient must give informed consent to the use of AI in the process of their diagnosis and treatment, which we believe is a good practice. Also, rules that should be fulfilled by the AI-based system, like the Assessment List for Trustworthy Artificial Intelligence (ALTAI) [207][208][209][210], have been formulated. In [211,212], 10 ethical risk points (ERPs) important to institutions, policymakers, teachers, students, and patients, including potential impacts on design, content, delivery, and AI-human communication in the field of AI and metaverse-based medical education, were defined.…”
Section: Discussion
confidence: 99%
“…In addition to the challenges posed by the “black box” issue leading to non-interpretable problems, biases and fairness, technical safety, preservation of human autonomy, privacy, and data security are significant AI ethics concerns within this field [ 20 , 180 ]. The development of trustworthy AI in healthcare has become a crucial responsibility worldwide [ 181 ]. For instance, the European Commission has enacted both the “Ethics Guidelines for Trustworthy AI” and the “Artificial Intelligence Act” [ 182 , 183 ].…”
Section: Discussion
confidence: 99%
“…For example, in Germany, a patient must give informed consent to the use of AI in the process of their diagnosis and treatment, which we believe is a good practice. Also, rules that should be fulfilled by the AI-based system, like the Assessment List for Trustworthy Artificial Intelligence (ALTAI) [173][174][175], have been formulated. In [163,176], 10 ethical risk points (ERPs) important to institutions, policymakers, teachers, students, and patients, including potential impacts on design, content, delivery, and AI-human communication in the field of AI and metaverse-based medical education, were defined.…”
Section: Discussion
confidence: 99%