Proceedings of the 28th International Conference on Intelligent User Interfaces 2023
DOI: 10.1145/3581641.3584075
Directive Explanations for Monitoring the Risk of Diabetes Onset: Introducing Directive Data-Centric Explanations and Combinations to Support What-If Explorations

Abstract: Explainable artificial intelligence is increasingly used in machine learning (ML)-based decision-making systems in healthcare. However, little research has compared the utility of different explanation methods in guiding healthcare experts for patient care. Moreover, it is unclear how useful, understandable, actionable, and trustworthy these methods are for healthcare experts, as they often require technical ML knowledge. This paper presents an explanation dashboard that predicts the risk of diabetes onset and …

Cited by 12 publications (17 citation statements)
References 57 publications
“…Over the past decade, the field of XAI has witnessed numerous studies being conducted to measure the efficacy of various explanation methods for increasing the transparency of ML systems across multiple application domains, such as healthcare [8,16,18,60], finance [9,13,24], and law enforcement [72,79,87]. Along with making black-box ML models more transparent, XAI methods have also aimed to make these systems more understandable and trustworthy [8,49,54]. Explanation methods have been categorised as model-specific or model-agnostic based on the degree of specificity [2,5].…”
Section: XAI Methods for ML Systems
Confidence: 99%
“…Data-centric explanations, on the other hand, aim to find insights from the training data to justify the behaviour of prediction models [5]. Recent works have shown that data-centric explanations can justify the failure of ML models by revealing bias, inconsistencies and quality of the training data [5,7,8]. Examples of data-centric explanation approaches include summarisation of the training data using descriptive statistics, disclosing the bias in training data by showing the distribution of the data across various demographic parameters and revealing the potential issues that can impact the data quality [5,7,8].…”
Section: XAI Methods for ML Systems
Confidence: 99%
“…Research Study 1 – In his first research study [7], the author designed and developed an explanation dashboard for monitoring the risk of diabetes onset. This interactive dashboard provides explanations for a diabetes risk prediction model by combining data-centric, feature importance and example-based local explanations.…”
Section: Current Status
Confidence: 99%