Proceedings of the ACM Web Conference 2022
DOI: 10.1145/3485447.3512045
Interpreting BERT-based Text Similarity via Activation and Saliency Maps

Abstract: Recently, there has been growing interest in the ability of Transformer-based models to produce meaningful embeddings of text with several applications, such as text similarity. Despite significant progress in the field, explaining similarity predictions remains challenging, especially in unsupervised settings. In this work, we present an unsupervised technique for explaining paragraph similarities inferred by pre-trained BERT models. By looking at a pair of paragraphs, our technique identifies importa…
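The idea of attributing a similarity score back to individual tokens can be illustrated with a generic sensitivity analysis. The sketch below is a hypothetical, simplified stand-in for the paper's method: it uses random arrays in place of BERT hidden states, mean pooling in place of the model's pooling, and finite-difference perturbation in place of the paper's activation- and gradient-based maps. Names such as `saliency` and `cosine` are illustrative, not from the paper.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two pooled paragraph vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def saliency(E1, E2, eps=1e-4):
    """Finite-difference sensitivity of the similarity score to each token
    embedding in E1: perturb each coordinate, accumulate the absolute change
    in similarity. Higher score = token matters more for the prediction."""
    base = cosine(E1.mean(axis=0), E2.mean(axis=0))
    scores = np.zeros(E1.shape[0])
    for i in range(E1.shape[0]):
        for j in range(E1.shape[1]):
            Ep = E1.copy()
            Ep[i, j] += eps
            scores[i] += abs(cosine(Ep.mean(axis=0), E2.mean(axis=0)) - base)
    return scores

# Toy stand-ins for BERT token embeddings of two paragraphs.
rng = np.random.default_rng(0)
E1 = rng.normal(size=(5, 8))  # 5 tokens, 8-dim embeddings
E2 = rng.normal(size=(4, 8))  # 4 tokens, 8-dim embeddings
scores = saliency(E1, E2)
print(scores.shape)  # one saliency score per token in E1
```

In practice one would replace the random arrays with the hidden states of a pre-trained BERT model and use gradients rather than finite differences; the point here is only the shape of the computation: a scalar similarity is differentiated with respect to per-token representations to rank token importance.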

Cited by 11 publications (1 citation statement)
References 50 publications (36 reference statements)
“…Ad category C): Moreover, local non-rule-based XAI approaches have been proposed to reason language model predictions. In Malkiel et al (2022), saliency maps are used to reason similarity predictions of online consumer reviews by a BERT-based model, aiming to highlight important word-pairs for specific similarity predictions. Moreover, different visualizations with respect to neuron activations in the hidden layers have been applied to reason specific language model predictions (Brasoveanu & Andonie, 2022).…”
Section: Ad Category B
Confidence: 99%