2023
DOI: 10.54364/aaiml.2023.1143
Evaluation of Explanation Methods of AI - CNNs in Image Classification Tasks with Reference-based and No-reference Metrics

Abstract: The most popular methods in the AI machine-learning paradigm are largely black boxes, which is why explaining AI decisions is urgently needed. Although dedicated explanation tools have been developed extensively, evaluating their quality remains an open research question. In this paper, we generalize methodologies for evaluating post-hoc explainers of CNN decisions in visual classification tasks with reference-based and no-reference metrics. We apply them to our previously developed explainers (FEM, …
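A reference-based metric compares an explanation map produced by an explainer against a human-derived reference map, such as a gaze fixation density map. As a minimal illustrative sketch (the Pearson correlation coefficient is one common choice for such comparisons, not necessarily the paper's exact metric, and the function name is hypothetical):

```python
import numpy as np

def reference_metric(saliency: np.ndarray, reference: np.ndarray) -> float:
    """Pearson correlation between an explanation (saliency) map and a
    reference map of the same shape, e.g. a gaze fixation density map.
    Returns a value in [-1, 1]; higher means better agreement."""
    s = saliency.ravel().astype(float)
    r = reference.ravel().astype(float)
    # Standardize each map (epsilon guards against constant maps).
    s = (s - s.mean()) / (s.std() + 1e-12)
    r = (r - r.mean()) / (r.std() + 1e-12)
    # Mean of the product of z-scores is the Pearson correlation.
    return float(np.mean(s * r))

m = np.random.rand(8, 8)
print(round(reference_metric(m, m), 3))   # identical maps: correlation ~1.0
print(round(reference_metric(m, -m), 3))  # inverted map: correlation ~-1.0
```

A no-reference metric, by contrast, would score the explanation map using only the image and the model's behavior (for instance, by perturbing the highlighted regions), without any human reference.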

Cited by 4 publications (1 citation statement)
References 22 publications (55 reference statements)
“…Gaze fixation recordings [30,31] are utilized to create GFDMs for directing models on their recognition task. A GFDM can identify the areas of an image relevant to humans.…”
Section: GFDMs for Ground Truth
Confidence: 99%
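A gaze fixation density map (GFDM) is typically built by accumulating recorded fixation points into a smooth density over the image, commonly by placing a Gaussian at each fixation. A minimal sketch under that assumption (the helper name, sigma value, and normalization choice are illustrative, not taken from the cited work):

```python
import numpy as np

def make_gfdm(fixations, shape, sigma=10.0):
    """Build a gaze fixation density map from (x, y) fixation points:
    sum an isotropic Gaussian centered at each fixation, then scale the
    result to [0, 1] so it can serve as a reference saliency map."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    gfdm = np.zeros(shape, dtype=float)
    for (x, y) in fixations:
        gfdm += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
    if gfdm.max() > 0:
        gfdm /= gfdm.max()
    return gfdm

# Two fixations produce two bright regions in a 64x64 map.
gfdm = make_gfdm([(16, 16), (48, 40)], (64, 64), sigma=8.0)
print(gfdm.shape)  # map has the image's spatial dimensions
```

Such a map can then stand in as the human "ground truth" against which an explainer's saliency map is scored with a reference-based metric.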