2022
DOI: 10.48550/arxiv.2204.11073
Preprint

Grad-SAM: Explaining Transformers via Gradient Self-Attention Maps

Oren Barkan,
Edan Hauon,
Avi Caciularu
et al.

Abstract: Transformer-based language models significantly advanced the state-of-the-art in many linguistic tasks. As this revolution continues, the ability to explain model predictions has become a major area of interest for the NLP community. In this work, we present Gradient Self-Attention Maps (Grad-SAM), a novel gradient-based method that analyzes self-attention units and identifies the input elements that best explain the model's prediction. Extensive evaluations on various benchmarks show that Grad-SAM obtains …
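Based on the method described in the abstract, below is a minimal, hypothetical sketch of how a Grad-SAM-style token relevance score could be computed with PyTorch and HuggingFace Transformers. The model name, input sentence, and the exact averaging over layers, heads, and attention positions are illustrative assumptions, not the authors' reference implementation.

```python
# Hedged sketch of a Grad-SAM-style scoring rule (assumptions noted inline).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed model: any BERT-style sequence classifier works the same way.
# attn_implementation="eager" ensures attention weights are returned.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", attn_implementation="eager"
)
model.eval()

inputs = tokenizer("a film of rare beauty", return_tensors="pt")
outputs = model(**inputs, output_attentions=True)

# Retain gradients on the (non-leaf) self-attention tensors.
for attn in outputs.attentions:
    attn.retain_grad()

# Back-propagate the logit of the predicted class.
predicted = outputs.logits.argmax(dim=-1).item()
outputs.logits[0, predicted].backward()

# Grad-SAM-style score: attention weights gated by the ReLU of their
# gradients, averaged over layers, heads, and query positions to yield one
# relevance score per input token (the averaging axes are an assumption).
maps = torch.stack([a * torch.relu(a.grad) for a in outputs.attentions])
scores = maps.mean(dim=(0, 2, 3)).squeeze(0)  # shape: (sequence_length,)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in zip(tokens, scores):
    print(f"{token}\t{score.item():.4f}")
```

The ReLU gate reflects the idea stated in the abstract of isolating attention units that positively support the prediction; gradients that point against the predicted class are zeroed rather than allowed to cancel positive evidence.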

Cited by 0 publications
References 27 publications