2020
DOI: 10.1186/s12911-020-01276-x

A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare

Abstract: Background: There is an increasing interest in clinical prediction tools that can achieve high prediction accuracy and provide explanations of the factors leading to increased risk of adverse outcomes. However, approaches to explaining complex machine learning (ML) models are rarely informed by end-user needs, and user evaluations of model interpretability are lacking in the healthcare domain. We used extended revisions of previously published theoretical frameworks to propose a framework for the design of user-…

Cited by 67 publications (41 citation statements)
References 31 publications
“…While AI algorithms have been validated and have been shown to have similar or higher accuracy than humans, recent studies of AI deployment in clinical settings report that professional autonomy, workflow, and local sociotechnical factors have impacts on how accuracy is perceived and used in clinical practice [ 24 , 43 , 45 - 47 , 50 - 54 ]. Bruun et al [ 24 ] found that overall performance was positively impacted among clinicians using an AI-prediction tool for assessing progression in early stage dementia and that clinicians’ professional autonomy impacts the use of medical AI in situated clinical practice.…”
Section: Discussion (mentioning)
confidence: 99%
“…The AI studied has limitations because only the random forest ML-based algorithm was evaluated with electrophysiologists. These types of methods are commonly used in medical applications [ 21 , 74 , 75 ] because of their high classification accuracy and capabilities for handling data with imbalanced classes [ 50 ] while providing easily accessible, if limited, global intelligibility through the visualization and ranking of parameter importance [ 55 ]. This work will benefit from being validated in a large-scale multicenter study with higher diversity in participating electrophysiologists and workflows.…”
Section: Discussion (mentioning)
confidence: 99%
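The "global intelligibility" described in the citation above comes from ranking a random forest's feature (parameter) importances. Below is a minimal sketch of that idea in Python with scikit-learn; the synthetic imbalanced dataset, the class_weight setting, and all names are illustrative assumptions, not details from the cited study.

```python
# Illustrative sketch only: synthetic data and hypothetical settings,
# not the model or data from the cited study.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic, imbalanced binary classification data (roughly 90/10 split).
X, y = make_classification(
    n_samples=1000, n_features=8, n_informative=4,
    weights=[0.9, 0.1], random_state=0,
)

# class_weight="balanced" is one common way to handle imbalanced classes.
model = RandomForestClassifier(
    n_estimators=200, class_weight="balanced", random_state=0,
).fit(X, y)

# Rank features by impurity-based importance: the "visualization and
# ranking of parameter importance" that gives limited global intelligibility.
for rank, idx in enumerate(np.argsort(model.feature_importances_)[::-1], 1):
    print(f"{rank}. feature_{idx}: {model.feature_importances_[idx]:.3f}")
```

Note that this ranking explains the model globally, not any single prediction, which is exactly the limitation the quoted passage flags.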
“…Another UI was developed by Cheng et al (2019) to support end users in understanding the algorithms for making university-admission decisions; the UI was found to improve the users' comprehension of the underlying algorithm. Barda et al (2020) developed an explanatory display for predictions based on a pediatric intensive care unit in-hospital mortality risk model; the users found the display useful.…”
Section: Explainable Artificial Intelligence and Local… (mentioning)
confidence: 99%
“…Despite active research in this context, there is a lack of user evaluation studies in the XAI field regarding the perception and effects of explanations on the targeted stakeholders (van der Waa et al, 2021). Moreover, different explanation goals and information needs, as well as varying backgrounds and/or expertise, can influence users' perceptions of XAI-based explanations, which further underlines the relevance of evaluations with targeted users (Barda et al, 2020;van der Waa et al, 2021). More specifically, we have identified two interconnected research gaps.…”
Section: Introduction (mentioning)
confidence: 97%
“…The Prescience model uses SHAP attribution to analyze preoperative factors and in-surgery parameters. In another study [ 20 ], a framework was proposed for the design of an explanatory display to interpret the prediction of a pediatric intensive care unit in-hospital mortality risk model. The explanation was displayed in a user-centric manner and constructed using Shapley values.…”
Section: Introduction (mentioning)
confidence: 99%
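The per-prediction explanations described above rest on Shapley values, which attribute a model's output for a single case across its input features. The sketch below assumes the open-source shap package and a stand-in random forest rather than the cited mortality-risk model; it shows only how such attributions are typically computed, not the cited display itself.

```python
# Illustrative sketch only: a stand-in classifier, not the cited
# mortality-risk model; assumes the open-source `shap` package is installed.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Tree SHAP computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # attributions for one prediction

# Each value is one feature's signed contribution to this prediction,
# relative to the explainer's baseline (expected) model output.
print(shap_values)
```

A user-centered display would then present these signed contributions (e.g., which factors pushed the predicted risk up or down) rather than the raw arrays printed here.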