Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.415

Exploring BERT’s Sensitivity to Lexical Cues using Tests from Semantic Priming

Abstract: Models trained to estimate word probabilities in context have become ubiquitous in natural language processing. How do these models use lexical cues in context to inform their word probabilities? To answer this question, we present a case study analyzing the pre-trained BERT model with tests informed by semantic priming. Using English lexical stimuli that show priming in humans, we find that BERT too shows "priming," predicting a word with greater probability when the context includes a related word versus an …
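The probing setup sketched in the abstract can be illustrated with a short example. This is not the authors' released code; it is a minimal sketch assuming the Hugging Face transformers masked-LM API, and the prime/target words ("doctor"/"pencil"/"nurse") are illustrative stand-ins for the controlled priming stimuli used in the paper.

```python
# Minimal sketch (not the paper's released code): compare the probability BERT
# assigns to a target word when the context contains a semantically related
# prime versus an unrelated control word. Word pairs here are illustrative only.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def target_probability(context: str, target: str) -> float:
    """Probability of `target` at the [MASK] position in `context`."""
    inputs = tokenizer(context, return_tensors="pt")
    # Locate the single [MASK] token in the input.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits[0, mask_pos], dim=-1)
    return probs[tokenizer.convert_tokens_to_ids(target)].item()

# If BERT shows priming-like behaviour, the target "nurse" should receive
# higher probability when the context contains the related prime "doctor".
related = "He saw the doctor talking to a [MASK]."
unrelated = "He saw the pencil lying next to a [MASK]."
print("related:", target_probability(related, "nurse"))
print("unrelated:", target_probability(unrelated, "nurse"))
```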


Cited by 31 publications (45 citation statements). References 28 publications.
“…Some scattered work has explored more semantic types of attractors for testing LMs; in particular, there is work looking at whether presence of certain context words will prime corresponding targets in context. Such work has experimented with contextual factors like distance between prime and target (Kassner and Schütze, 2020), as well as contextual constraint (Misra et al., 2020). We build on this existing work with a more systematic exploration of impacts of different types of attractors, and with a more targeted goal of testing models' robustness in processing new facts from context.…”
Section: Related Work
confidence: 99%
“…The analysis of semantic capabilities in LMs includes studies on negative polarity in LSTM LMs (Marvin and Linzen, 2018; Jumelet and Hupkes, 2018), reasoning based on higher-order linguistic skill (Talmor et al., 2019), arithmetic and compositional semantics (Staliūnaitė and Iacobacci, 2020), and stereotypic tacit assumptions and lexical priming (Misra et al., 2020; Weir et al., 2020). Many of these studies look at recent PLMs and draw mixed conclusions about the level of semantics encoded by these models.…”
Section: Related Work
confidence: 99%
“…We made use of synthetically modified language data that accentuated, or weakened, evidence for certain linguistic processes. The goal of such modification in our work is quite similar both to work which attempts to remove targeted linguistic knowledge in model representations (e.g., Ravfogel et al., 2020; Elazar et al., 2021) and to work which investigates the representational space of models via priming (Prasad et al., 2019; Misra et al., 2020).…”
Section: Related Work
confidence: 99%