Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.109

Quantifying the Contextualization of Word Representations with Semantic Class Probing

Abstract: Pretrained language models achieve state-of-the-art results on many NLP tasks, but there are still many open questions about how and why they work so well. We investigate the contextualization of words in BERT. We quantify the amount of contextualization, i.e., how well words are interpreted in context, by studying the extent to which semantic classes of a word can be inferred from its contextualized embedding. Quantifying contextualization helps in understanding and utilizing pretrained language models. We sho…
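The probing setup described in the abstract amounts to training a light-weight classifier on frozen contextualized embeddings and reading its accuracy as a measure of how much semantic-class information the embedding carries. Below is a minimal, hypothetical sketch of that idea (not the authors' code): it extracts top-layer BERT embeddings for a target word with the Hugging Face transformers library and fits a linear probe with scikit-learn. The sentences, the two-class "bank" distinction, and all variable names are illustrative assumptions, not the paper's actual semantic classes or data.

```python
# Minimal sketch of semantic class probing on contextualized embeddings.
# Assumes: transformers, torch, scikit-learn installed; toy data only.
import torch
from transformers import BertTokenizer, BertModel
from sklearn.linear_model import LogisticRegression

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Top-layer embedding of the first occurrence of `word` in `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]        # (seq_len, 768)
    word_id = tokenizer.encode(word, add_special_tokens=False)[0]
    pos = (enc["input_ids"][0] == word_id).nonzero()[0].item()
    return hidden[pos]

# Toy probing data: can the embedding of "bank" be classified by sense?
train = [
    ("She deposited the check at the bank.", "bank", 0),      # 0 = financial
    ("The bank approved the mortgage loan.", "bank", 0),
    ("They picnicked on the bank of the river.", "bank", 1),  # 1 = river bank
    ("The boat drifted toward the muddy bank.", "bank", 1),
]
X = torch.stack([word_embedding(s, w) for s, w, _ in train]).numpy()
y = [label for _, _, label in train]

# Linear probe: high accuracy would indicate the class is recoverable
# from the contextualized embedding.
probe = LogisticRegression(max_iter=1000).fit(X, y)
test_vec = word_embedding("He withdrew cash from the bank.", "bank").numpy()
print("predicted semantic class:", probe.predict([test_vec])[0])
```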

Cited by 11 publications (7 citation statements)
References 42 publications
“…However, there are studies that investigate the LMs' knowledge about selectional preferences of verbs and semantic types. Their findings suggest that contextual language models encode information about the selectional preferences of verbs (Metheniti et al., 2020; Li et al., 2021; Pedinotti et al., 2021) and the semantic type of nouns in general (Zhao et al., 2020). Similar to our study, Zhao et al. (2020) trained classifiers on the extracted representations of the BERT model for their tasks, and these classifiers achieved high accuracy scores.…”
Section: Selectional Preference and Semantic Type Knowledge of LMs (supporting)
confidence: 78%
“…Few-shot learners in NLP. Significant progress has been made in developing (Devlin et al., 2019; Peters et al., 2018; Brown et al., 2020), understanding (Liu et al., 2019; Tenney et al., 2019; Belinkov and Glass, 2019; Hewitt and Liang, 2019; Hewitt and Manning, 2019; Zhao et al., 2020a; Rogers et al., 2020), and utilizing (Houlsby et al., 2019; Zhao et al., 2020b; Brown et al., 2020; Li and Liang, 2021; Schick and Schütze, 2021a; Lester et al., 2021; Mi et al., 2021a) PLMs. Brown et al. (2020), Schick and Schütze (2021a), and Liu et al. (2021b) show that PLMs can serve as data-efficient few-shot learners, through priming or prompting (Liu et al., 2021a).…”
Section: Related Work (mentioning)
confidence: 99%
“…Chronis and Erk (2020) studied the similarity and relatedness of contextual representations in the embedding spaces of BERT, while Brunner et al. (2019) studied how identifiable the intermediate representations of BERT are with respect to the input. Zhao et al. (2020) quantified the contextual knowledge of BERT, and Zhao et al. (2021) analyzed the embedding spaces of BERT in order to quantify the non-linearity of its layers.…”
Section: Related Work (mentioning)
confidence: 99%