2024
DOI: 10.1080/10447318.2024.2323263

Interpretability is in the Eye of the Beholder: Human Versus Artificial Classification of Image Segments Generated by Humans Versus XAI

Romy Müller, Marius Thoß, Julian Ullrich, et al.
Cited by 3 publications (8 citation statements)
References: 57 publications
“…Thus, our tasks do not easily scale to more complex categorization demands. Some of these implementation issues could be solved by minor changes to the procedure, such as performing verbal instead of manual labelling (e.g., [56]) or presenting a category label beforehand and then having participants indicate whether it matches the image or not (e.g., [67,68]).…”
Section: Particularities of Task Implementation (mentioning)
confidence: 99%
“…[45]). To avoid the pitfalls of inferring the suitability of CNN attention maps from their similarity to humans, external criteria are needed, such as whether they are interpretable and support human task performance [68,77,78].…”
Section: Implications for Practical Application and Perspectives for F... (mentioning)
confidence: 99%
“…Several studies reported human performance differences depending on attribution methods. These effects were most prominent when participants had to classify image segments that were generated by different attribution methods (Biessmann & Refiano, 2019; John et al., 2021; Lu et al., 2021; Müller, Thoß, et al., 2024; Selvaraju et al., 2017). Obviously, this task depends on the attributions as it requires a classification of highly restricted information.…”
Section: XAI-Related Factors (mentioning)
confidence: 99%
“…Participants strongly benefitted from one method of generating saliency maps, whereas two other methods even impaired performance as they either highlighted too few or too many positions. Moreover, the superiority of attribution methods can depend on other factors such as image type (Müller, Thoß, et al., 2024). When the class was defined by singular objects, participants could more easily classify segments generated by XRAI than Grad-CAM, while the reverse was true for complex scenes.…”
Section: XAI-Related Factors (mentioning)
confidence: 99%