2022
DOI: 10.1007/978-3-031-12807-3_4

Methods and Metrics for Explaining Artificial Intelligence Models: A Review

Cited by 4 publications (4 citation statements)
References 31 publications
“…Drusen has slightly lower sensitivity, while CNV has the highest specificity (>0.97), suggesting strong performance in distinguishing its class. Because it is hard to understand how the CNN model predicted the output, XAI techniques are used to explain it [32]. The testing images are on the left, and each explanation has a transparent grey background (see …).”
Section: Statistical Results (mentioning)
confidence: 99%
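Since this statement reports per-class sensitivity and specificity, a minimal sketch of how such metrics are derived from a multi-class confusion matrix may help; the class names, labels, and predictions below are placeholders, not data from the cited paper.

```python
# Hypothetical per-class sensitivity/specificity computation.
# Class names and label arrays are illustrative placeholders only.
import numpy as np
from sklearn.metrics import confusion_matrix

classes = ["CNV", "DME", "Drusen", "Normal"]   # assumed OCT classes
y_true = np.array([0, 1, 2, 3, 0, 2, 1, 3])    # placeholder ground truth
y_pred = np.array([0, 1, 2, 3, 0, 3, 1, 3])    # placeholder predictions

cm = confusion_matrix(y_true, y_pred, labels=range(len(classes)))
for i, name in enumerate(classes):
    tp = cm[i, i]
    fn = cm[i, :].sum() - tp
    fp = cm[:, i].sum() - tp
    tn = cm.sum() - tp - fn - fp
    sensitivity = tp / (tp + fn)   # true-positive rate for this class
    specificity = tn / (tn + fp)   # true-negative rate for this class
    print(f"{name}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```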
“…The proposed lightweight model reported in Table 1 provides a concise overview of the CNN architecture, and Figure 4 shows the graphical result. The model has only five convolution layers, which perform convolution operations on the input image with increasing filter depths (16, 32, 64, 128, 256) to capture hierarchical features. Each convolution layer is followed by a max-pooling layer that down-samples the feature maps, aiding information compression.…”
Section: CNN Model (mentioning)
confidence: 99%
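The architecture described in this statement (five convolution layers with filter depths 16, 32, 64, 128, 256, each followed by max-pooling) can be sketched roughly as follows; the input size and the dense classifier head are assumptions, not details from the cited paper.

```python
# Rough sketch of the lightweight CNN described above.
# Input shape and classifier head are assumed, not taken from the source.
from tensorflow import keras
from tensorflow.keras import layers

def build_lightweight_cnn(input_shape=(224, 224, 3), num_classes=4):
    model = keras.Sequential([keras.Input(shape=input_shape)])
    for depth in (16, 32, 64, 128, 256):
        # Convolution layer capturing hierarchical features at increasing depth
        model.add(layers.Conv2D(depth, kernel_size=3, padding="same", activation="relu"))
        # Max-pooling layer down-sampling the feature maps
        model.add(layers.MaxPooling2D(pool_size=2))
    model.add(layers.Flatten())
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model

model = build_lightweight_cnn()
model.summary()
```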
“…Since it was difficult to directly interpret the mathematical behavior of the CNN model, XAI techniques were applied to the model [60]. Four results were illustrated by SHAP for each category (cyst, normal, stone, tumor). Testing images are shown on the left, with a transparent gray background behind each explanation.…”
Section: Descriptive Analysis From XAI, 5.2.1 SHAP (mentioning)
confidence: 99%
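A minimal sketch of applying SHAP to a CNN classifier, as described in this statement, is given below. The model and image arrays are random placeholders standing in for the kidney images (cyst, normal, stone, tumor), and the choice of GradientExplainer is an assumption rather than the cited paper's exact setup.

```python
# Hypothetical SHAP explanation of a CNN classifier; all data are placeholders.
import numpy as np
import shap
from tensorflow import keras
from tensorflow.keras import layers

class_names = ["cyst", "normal", "stone", "tumor"]

# Placeholder CNN; in practice this would be the trained model being explained
model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(len(class_names), activation="softmax"),
])

background = np.random.rand(20, 64, 64, 3).astype("float32")   # reference images
test_images = np.random.rand(4, 64, 64, 3).astype("float32")   # images to explain

# GradientExplainer handles differentiable models such as CNNs
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(test_images)   # one attribution map per class

# Overlay the attributions on the greyed-out test images, one row per image
shap.image_plot(shap_values, test_images)
```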
“…Furthermore, the integration of models and methods from Explainable Artificial Intelligence (Banerjee and Barnwal, 2023), especially in the processes that involve Machine Learning algorithms (e.g., argument mining or argument generation), will contribute to the transparency, interpretability and understandability of the outputs of the Web of Debates tools and applications and to the establishment of trust with their users. Computational argumentation has already proved to be a very useful tool for developing explainable systems (Vassiliades et al, 2021), while the recent launch of the International Workshop on Argumentation for Explainable AI shows that this is an active area of interest for researchers in computational argumentation. (Footnote links: http://en.wikipedia.org/wiki/Fact_checking, https://en.wikipedia.org/wiki/List_of_fact-checking_websites)…”
Section: Ethical Issues (mentioning)
confidence: 99%