Published: 2021
DOI: 10.1007/978-3-030-79725-6_4
Trustworthy Explainability Acceptance: A New Metric to Measure the Trustworthiness of Interpretable AI Medical Diagnostic Systems

Abstract: We propose the Trustworthy Explainability Acceptance metric to evaluate explainable AI systems with an expert in the loop. Our metric calculates acceptance by quantifying the distance between the explanations generated by the AI system and the reasoning provided by the experts based on their expertise and experience. Our metric also evaluates the trust of the experts, via our trust mechanism, so that different groups of experts can be included. Our metric can be easily adapted to any Interpretable AI system and be used in the s…
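To make the distance-based formulation in the abstract concrete, the following is a minimal Python sketch, assuming explanations are represented as feature-importance vectors and expert trust as nonnegative weights. The function names, the cosine-distance choice, and the trust-weighted aggregation are illustrative assumptions, not the paper's actual definitions.

import numpy as np

def explanation_distance(ai_expl, expert_expl):
    # Cosine distance between two feature-importance vectors
    # (an assumed representation; the paper may define distance differently).
    a = np.asarray(ai_expl, dtype=float)
    b = np.asarray(expert_expl, dtype=float)
    cos = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos

def explainability_acceptance(ai_expl, expert_expls, trust_weights):
    # Trust-weighted acceptance: 1 minus the trust-weighted mean distance
    # between the AI explanation and each expert's explanation.
    w = np.asarray(trust_weights, dtype=float)
    w = w / w.sum()  # normalize trust scores over the expert group
    d = np.array([explanation_distance(ai_expl, e) for e in expert_expls])
    return 1.0 - float(w @ d)

# Illustrative usage: three experts with differing trust scores.
ai = [0.7, 0.2, 0.1]                                    # AI feature importances
experts = [[0.6, 0.3, 0.1], [0.8, 0.1, 0.1], [0.2, 0.5, 0.3]]
trust = [0.9, 0.8, 0.4]                                 # trust in each expert
print(f"acceptance = {explainability_acceptance(ai, experts, trust):.3f}")

Under these assumptions, a higher trust weight makes disagreement with that expert cost more, which matches the abstract's idea of folding expert trust into the acceptance score.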

Cited by 24 publications (16 citation statements). References 30 publications.
“…A two-sided t-test of the TCAV scores should be used to evaluate whether the null hypothesis is rejected [63]. The Trustworthy Explainability Acceptance metric quantitatively measures the distance between explanations produced by the AI system and the explanations provided by medical experts [61]. Singla et al. [129] used three metrics to evaluate counterfactual explanations for chest X-ray classification: Fréchet Inception Distance (FID), Counterfactual Validity (CV), and Foreign Object Preservation (FOP).…”
Section: Other Methods
Citation type: mentioning
Confidence: 99%
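As a concrete illustration of the t-test step mentioned above, here is a minimal sketch following the standard TCAV significance-testing recipe: scores for the concept of interest are compared against scores obtained with random counterexample concepts. The sample values are invented for illustration and are not from the cited works.

import numpy as np
from scipy import stats

# Hypothetical TCAV scores over repeated runs (illustrative values only):
# one sample for the concept under test, one for random baseline concepts.
concept_scores = np.array([0.81, 0.77, 0.84, 0.79, 0.82, 0.80])
random_scores = np.array([0.52, 0.47, 0.55, 0.49, 0.51, 0.48])

# Two-sided t-test: the null hypothesis says the concept's TCAV scores
# come from the same distribution as the random-concept scores.
t_stat, p_value = stats.ttest_ind(concept_scores, random_scores)

alpha = 0.05
if p_value < alpha:
    print(f"Reject the null hypothesis (p = {p_value:.4f}).")
else:
    print(f"Fail to reject the null hypothesis (p = {p_value:.4f}).")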
“…Numerous studies have explored trust modeling and its application in diverse scenarios. For instance, a trust framework proposed in Reference 11, grounded in measurement theory, finds applicability across various domains, including crime detection [12], social networks [13,14], the food-energy sector [15-20], healthcare [21,22], edge computing [23-27], quantum computing [28], and beyond. Recent work [21,29-31] has highlighted trust's utility as an acceptance criterion for artificial intelligence algorithms.…”
Section: Role of Trust
Citation type: mentioning
Confidence: 99%
“…In [24], a measurement theory-based trust management framework was proposed for online social communities. This framework has since been shown to facilitate decision-making in multiple areas, such as online social networks [25], the food-energy-water nexus [44], crime detection [29], and cancer diagnosis [53]. It is a very flexible yet robust framework that can be adapted to different scenarios to capture trust.…”
Section: Trust Management Framework
Citation type: mentioning
Confidence: 99%
“…To address concerns about measuring the different aspects of trustworthiness, the metrics of acceptance [51] and fairness [52] were proposed to facilitate environmental decision making, an explainability metric [53] was proposed to interpret AI medical diagnosis systems, and a trustability metric [54] was proposed to assess trust in cloud computing. This paper presents an extended version of the trust management framework that includes the trustability metric, which helps to take action when an external attack or an internal event occurs in an autonomous device equipped with sensors or in a service running on the cloud.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%