2022 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip46576.2022.9897347
I Saw: A Self-Attention Weighted Method for Explanation of Visual Transformers

Abstract: Recently, visual transformers have shown promising results in tasks such as image classification, segmentation, and object detection. The explanation of their decisions, however, remains a challenge. This paper focuses on exploiting self-attention for explanation. We propose a generalized interpretation of transformers, i.e., model-agnostic but class-specific explanations. The main principle is the use and weighting of the self-attention maps of a visual transformer. To evaluate it, we use the popular hypothesis that an…
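The abstract describes combining weighted self-attention maps into a class-specific saliency map. The paper's actual weighting scheme is not given in the truncated abstract, so the sketch below is only an illustration of the general idea: per-head self-attention maps are averaged with scalar head weights (a hypothetical weighting, not the authors' method), and the CLS-token row of the result is read off as a patch-level saliency map.

```python
import numpy as np

def weighted_attention_saliency(attn_maps, head_weights):
    """Combine per-head self-attention maps into one saliency map.

    attn_maps    : (H, N, N) array, one self-attention map per head,
                   with rows softmax-normalized over the N tokens
                   (token 0 assumed to be the CLS token).
    head_weights : (H,) array of per-head importance weights --
                   hypothetical; the paper's weighting may differ.

    Returns an (N-1,) saliency vector over the patch tokens, taken
    from the CLS row of the weighted-average attention map and
    rescaled to [0, 1].
    """
    attn_maps = np.asarray(attn_maps, dtype=float)
    w = np.asarray(head_weights, dtype=float)
    w = w / w.sum()                            # normalize head weights
    avg = np.tensordot(w, attn_maps, axes=1)   # (N, N) weighted average
    cls_row = avg[0, 1:]                       # CLS attention to patches
    return cls_row / cls_row.max()             # rescale to [0, 1]
```

With uniform attention and equal head weights, every patch receives the same score, which rescales to 1.0 everywhere; non-uniform head weights would instead emphasize the patches attended to by the highly weighted heads.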

Cited by 4 publications
References 12 publications