2021
DOI: 10.3389/fhumd.2021.673104

On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls

Abstract: Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI in healthcare also brings potential risks that may cause unintended harm. To guide future developments in AI, the High-Level Expert Group on AI set up by the European Commission (EC), recently published ethics guidelines for what it terms “trustworthy” AI. These guidelines are aimed at a variety of stakeholders, especially guiding p…



Cited by 19 publications (18 citation statements)
References 59 publications
“…The findings show that several socio-environmental factors, which impact model performance, could not be foreseen during development. Our previous work within Z-inspection® (Zicari et al., 2021b) makes a similar finding when assessing a case where AI is used to detect sound patterns of callers to an emergency line, to indicate the probability of a cardiac arrest situation to an operator. The descriptive statistics of the modeling solution show improved ability for detection; however, a further examination of the collaborative system shows that operator trust toward the system is low (Blomberg et al., 2021).…”
Section: Related Work
confidence: 73%
“…Our approach uses a holistic process, called Z-inspection® (Zicari et al., 2021b), to assist engineers in the early co-design of an AI system so that it satisfies the requirements for Trustworthy AI as defined by the High-Level Expert Group on AI (AI HLEG) set up by the European Commission. One of the key features of Z-inspection® is the involvement of a multidisciplinary team of experts co-creating together with the AI engineers and their managers to ensure that the AI system is trustworthy.…”
Section: Trustworthy Artificial Intelligence Co-design
confidence: 99%
“…Going from practice to principles starts with cases. The two discussed here were produced by a team of philosophers, computer scientists, lawyers, and doctors organized out of the Frankfurt Big Data Lab in Germany (Zicari et al., 2021a; Zicari et al., 2021b). Working with algorithmic startup companies, we collaboratively explore their development experiences, with attention split between ethics, technology, law, and medicine (Brusseau 2020).…”
Section: Cases
confidence: 99%
“…If blue lips are mentioned, the probability also rises. If both, the alarm illuminates (Zicari et al., 2021a). However, beyond that and a few similar anecdotes, there was no human-oriented discussion of the AI process, nothing that would make sense to a doctor.…”
Section: Explainability or Performance?
confidence: 99%