Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021) 2021
DOI: 10.18653/v1/2021.repl4nlp-1.19

Deriving Word Vectors from Contextualized Language Models using Topic-Aware Mention Selection

Abstract: One of the long-standing challenges in lexical semantics consists in learning representations of words which reflect their semantic properties. The remarkable success of word embeddings for this purpose suggests that high-quality representations can be obtained by summarizing the sentence contexts of word mentions. In this paper, we propose a method for learning word representations that follows this basic strategy, but differs from standard word embeddings in two important ways. First, we take advantage of con…
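As an illustration of the "summarizing the sentence contexts of word mentions" strategy the abstract refers to, the sketch below averages a target word's contextualized BERT embeddings over a few mention sentences to obtain a static word vector. This is an assumed baseline, not the paper's topic-aware mention selection method; the model choice (bert-base-uncased) and all function and variable names are illustrative.

```python
# Illustrative sketch (assumed setup, not the paper's method): build a static
# vector for a target word by averaging its contextualized embeddings over mentions.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_vector(target: str, sentences: list[str]) -> torch.Tensor:
    """Average the last-layer embeddings of `target`'s subword span across mention sentences."""
    vectors = []
    target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
    for sentence in sentences:
        encoding = tokenizer(sentence, return_tensors="pt", truncation=True)
        ids = encoding["input_ids"][0].tolist()
        # Locate the target's subword span within the tokenized sentence.
        for i in range(len(ids) - len(target_ids) + 1):
            if ids[i:i + len(target_ids)] == target_ids:
                with torch.no_grad():
                    hidden = model(**encoding).last_hidden_state[0]
                vectors.append(hidden[i:i + len(target_ids)].mean(dim=0))
                break
    return torch.stack(vectors).mean(dim=0)

vec = word_vector("bank", ["She sat on the river bank.", "The bank approved the loan."])
print(vec.shape)  # torch.Size([768])
```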

Cited by 1 publication (1 citation statement)
References 26 publications (26 reference statements)
“…They have used the BERT-base final layer's [CLS] token embedding as the corresponding embedding of an input document. Wang et al. [55] have argued that BERT contextual embeddings can be improved by adding topical information to them. In their study, BERT embeddings were derived from topics in the corpus.…”
Section: Related Work
Confidence: 99%
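A minimal sketch of the document-embedding step this citing statement describes, assuming the standard HuggingFace transformers API; the model name and variable names are illustrative, not taken from the cited work.

```python
# Minimal sketch (not the cited authors' code): obtain a document embedding from
# the final-layer [CLS] token of BERT-base, as described in the citation above.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

document = "Word embeddings summarize the sentence contexts of word mentions."
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    outputs = model(**inputs)

# The [CLS] token sits at position 0 of the final hidden layer.
cls_embedding = outputs.last_hidden_state[:, 0, :]  # shape: (1, 768)
```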