Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1627

What Does This Word Mean? Explaining Contextualized Embeddings with Natural Language Definition

Abstract: Contextualized word embeddings have boosted many NLP tasks compared with traditional static word embeddings. However, the word with a specific sense may have different contextualized embeddings due to its various contexts. To further investigate what contextualized word embeddings capture, this paper analyzes whether they can indicate the corresponding sense definitions and proposes a general framework that is capable of explaining word meanings given contextualized word embeddings for better interpretation. T…
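The framework described in the abstract maps a word's contextualized embedding to a natural-language sense definition. As a rough illustration of the underlying retrieval idea, the sketch below ranks candidate WordNet-style glosses by cosine similarity against an embedding of the usage context; the choice of bert-base-uncased, the mean-pooling of the whole sentence (rather than the target word's own tokens), and the helper names are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch: rank candidate sense definitions for a word in context
# by cosine similarity between contextualized embeddings.
# Model choice and pooling strategy are illustrative assumptions only.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pool the last hidden states into a single vector (illustrative pooling)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

def rank_definitions(context: str, definitions: list[str]) -> list[tuple[str, float]]:
    """Score each candidate sense definition against the usage context."""
    ctx_vec = embed(context)
    scored = []
    for definition in definitions:
        sim = torch.nn.functional.cosine_similarity(ctx_vec, embed(definition), dim=0).item()
        scored.append((definition, sim))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Example: disambiguating "bank" against two WordNet-style glosses.
context = "She deposited the check at the bank before noon."
definitions = [
    "a financial institution that accepts deposits and channels money into lending",
    "sloping land beside a body of water",
]
for definition, score in rank_definitions(context, definitions):
    print(f"{score:.3f}  {definition}")
```

In the setting the abstract describes, the top-ranked definition would serve as the human-readable explanation of what the contextualized embedding encodes for that particular occurrence of the word.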

Cited by 15 publications (15 citation statements). References 17 publications.

“…We rank definitions by applying the MBRR plus cosine similarity strategy described in Section 3.2.3. Chang and Chen (2019) achieves, in most cases, the highest recovery rate. However, with k = 1, which is the most realistic case, Gen-CHA S outperforms the competitor by 4.6 points when macro-averaging on senses, i.e.…”
Section: Methods (mentioning)
confidence: 93%
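The "recovery rate" and macro-averaging over senses mentioned in this quote can be read as a recall@k computed per sense and then averaged so that each sense counts equally. A minimal sketch under that reading; the function and grouping names below are hypothetical, not the citing paper's actual evaluation code:

```python
# Hedged sketch of a "recovery rate" style metric: the fraction of instances
# whose gold definition appears in the top-k ranked candidates, macro-averaged
# over senses (each sense contributes equally, regardless of its frequency).
from collections import defaultdict

def recovery_at_k(examples, k=1):
    """examples: iterable of (sense_id, gold_definition, ranked_definitions)."""
    per_sense = defaultdict(list)
    for sense_id, gold, ranked in examples:
        per_sense[sense_id].append(1.0 if gold in ranked[:k] else 0.0)
    sense_scores = [sum(hits) / len(hits) for hits in per_sense.values()]
    return sum(sense_scores) / len(sense_scores)
```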
“…Recent approaches have explored the use of large-scale pre-trained models to score definitions with respect to a usage context. For example, Chang and Chen (2019) proposed to recast DM as a definition ranking problem. A similar idea was applied in WSD by Huang et al (2019), leading to state-of-the-art results.…”
Section: Related Work (mentioning)
confidence: 99%
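The quoted passage refers to scoring candidate definitions against a usage context with a large pre-trained model. One common way to do this, roughly the gloss-scoring idea applied to WSD by Huang et al. (2019), is a cross-encoder that reads the context and a gloss jointly. The model name and the untrained classification head below are assumptions, and the scorer would need fine-tuning on (context, gloss) pairs before its outputs are meaningful:

```python
# Hedged sketch of cross-encoder definition scoring: context and gloss are
# encoded jointly and a classification head scores the pair. The head here is
# freshly initialized, so real use requires fine-tuning on labeled pairs.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
scorer = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
scorer.eval()

def score_gloss(context: str, gloss: str) -> float:
    """Return the (illustrative, pre-fine-tuning) probability that the gloss fits the context."""
    inputs = tokenizer(context, gloss, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = scorer(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()
```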