2020 · DOI: 10.28995/2075-7182-2020-19-13-32

Word2vec Not Dead: Predicting Hypernyms of Co-Hyponyms Is Better Than Reading Definitions

Abstract: Expert-built lexical resources are known to provide high-quality information at the cost of low coverage. This property limits their applicability in modern NLP applications. Building descriptions of lexical-semantic relations manually in sufficient volume requires a huge amount of qualified human labour. However, given that some initial version of a taxonomy is already built, automatic or semi-automatic taxonomy enrichment systems can greatly reduce the required effort. We propose and experiment with two approaches…

Cited by 5 publications (5 citation statements) · References 3 publications (7 reference statements)

“…assigned one or more hypernyms [37] or ranked all hypernyms by suitability for a particular word [20]. They also used a range of additional resources, such as Wiktionary, dictionaries, and additional corpora [2]. Interestingly, only one of the well-performing models [78] used context-informed embeddings (BERT [24]) or external tools such as online Machine Translation (MT) and search engines (the best-performing model, denoted as Yuriy in the workshop description paper).…”
Section: Prior Art on Taxonomy Enrichment
confidence: 99%
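The ranking formulation mentioned in the statement above can be made concrete with a small sketch: given pre-trained word vectors, every candidate hypernym is scored by cosine similarity to the target word and the list is sorted. The `vectors` lookup and the function name below are illustrative placeholders, not the setup of the cited systems.

```python
import numpy as np


def rank_hypernyms(vectors: dict, word: str, candidates: list) -> list:
    """Return hypernym candidates ordered by cosine similarity to `word`.

    `vectors` is a hypothetical word -> np.ndarray embedding lookup.
    """
    w = vectors[word]
    w = w / np.linalg.norm(w)  # normalise once for the target word

    def cosine(candidate: str) -> float:
        c = vectors[candidate]
        return float(np.dot(w, c / np.linalg.norm(c)))

    return sorted(candidates, key=cosine, reverse=True)
```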
“…Another option is to train word2vec embeddings from scratch and cast the task as a classification problem [37]. Some participants compared the approach based on the XLM-R model [19] with the word2vec "hypernyms of co-hyponyms" method [2]. It treats nearest neighbours as co-hyponyms and takes their hypernyms as candidate synsets.…”
Section: Word Vector Representations for Taxonomies
confidence: 99%
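The "hypernyms of co-hyponyms" method quoted above lends itself to a short illustration. This is a minimal sketch, assuming a gensim KeyedVectors word2vec model and a plain dict standing in for a RuWordNet hypernym lookup; the similarity-weighted voting is an illustrative choice, not necessarily the paper's exact scoring.

```python
from collections import Counter

from gensim.models import KeyedVectors


def hypernyms_of_cohyponyms(model: KeyedVectors, hypernym_ids: dict,
                            word: str, topn: int = 10, k: int = 5) -> list:
    """Treat the word's nearest neighbours as co-hyponyms and rank the
    hypernyms of those neighbours as candidate synsets.

    `hypernym_ids` is a hypothetical word -> iterable-of-synset-ids lookup.
    """
    votes = Counter()
    for neighbour, similarity in model.most_similar(word, topn=topn):
        # Each co-hyponym votes for its hypernyms, weighted by similarity.
        for synset_id in hypernym_ids.get(neighbour, ()):
            votes[synset_id] += similarity
    return [synset_id for synset_id, _ in votes.most_common(k)]
```

Weighting votes by neighbour similarity is one natural choice; a plain count of how many neighbours share a hypernym works as well.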
“…They cast the task as a classification problem where words are assigned one or more hypernyms (Kunilovskaya et al., 2020), or ranked all hypernyms by suitability for a particular word (Dale, 2020). They also used a range of additional resources, such as Wiktionary (Arefyev et al., 2020), dictionaries, and additional corpora. Interestingly, only one of the well-performing models (Tikhomirov et al., 2020) used context-informed embeddings (BERT).…”
Section: Related Work
confidence: 99%
“…MorphoBabushka (alvadia, maxfed, joystick) [2]: This team used the following pipeline. First, they retrieved the nearest neighbours of the target word from a word2vec Skip-gram with Negative Sampling (SGNS) model trained on the Librusec book collection [1] and searched for their direct and indirect hypernyms in RuWordNet.…”
Section: Yuriy
confidence: 99%
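The quoted pipeline can be sketched briefly. Assumptions, not the team's actual code: a gensim KeyedVectors model stands in for the SGNS model trained on Librusec, and `synsets_of` / `direct_hypernyms` are hypothetical RuWordNet lookup callables; the team's candidate ranking and morphological processing are not reproduced here.

```python
from gensim.models import KeyedVectors


def collect_candidates(sgns: KeyedVectors, synsets_of, direct_hypernyms,
                       word: str, topn: int = 20) -> set:
    """Gather direct and indirect (second-order) hypernyms of the
    target word's nearest neighbours."""
    candidates = set()
    for neighbour, _ in sgns.most_similar(word, topn=topn):
        for synset in synsets_of(neighbour):
            for hyper in direct_hypernyms(synset):
                candidates.add(hyper)                       # direct hypernym
                candidates.update(direct_hypernyms(hyper))  # indirect (2nd order)
    return candidates
```

Stopping at second-order hypernyms is a simplification; traversing RuWordNet's full hypernym closure for "indirect" candidates is an equally plausible reading of the quoted description.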