Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP (2019)
DOI: 10.18653/v1/w19-2010
Evaluation of Morphological Embeddings for English and Russian Languages

Abstract: This paper evaluates morphology-based embeddings for the English and Russian languages. Despite the interest in and introduction of several morphology-based word embedding models in the past, and their acclaimed performance improvements on word similarity and language modeling tasks, in our experiments we did not observe any stable preference over two of our baseline models: SkipGram and FastText. The performance exhibited by the morphological embeddings is the average of the two baselines mentioned above.
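To make the evaluation setup concrete, the sketch below shows one way to train the two baselines named in the abstract, SkipGram and FastText, and score them on a word-similarity benchmark. This is a minimal illustration, not the authors' code: it assumes gensim 4.x, and the file names "corpus.txt" and "wordsim.tsv" are hypothetical placeholders for a tokenized corpus and a tab-separated word-pair dataset with gold similarity scores.

```python
# Minimal sketch (not the authors' code): train the SkipGram and FastText
# baselines and score them on a word-similarity file.
from gensim.models import Word2Vec, FastText
from gensim.models.word2vec import LineSentence

sentences = LineSentence("corpus.txt")  # hypothetical corpus, one sentence per line

# SkipGram baseline (sg=1 selects skip-gram rather than CBOW)
skipgram = Word2Vec(sentences, vector_size=300, sg=1, window=5, min_count=5)

# FastText baseline: same setup plus character n-grams (min_n..max_n)
fasttext = FastText(sentences, vector_size=300, sg=1, window=5, min_count=5,
                    min_n=3, max_n=6)

# Spearman correlation against human similarity judgements
for name, model in [("SkipGram", skipgram), ("FastText", fasttext)]:
    pearson, spearman, oov = model.wv.evaluate_word_pairs("wordsim.tsv")
    print(f"{name}: spearman={spearman[0]:.3f}, oov={oov:.1f}%")
```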

Cited by 5 publications (5 citation statements)
References 4 publications
“…Neural network-based language models produce outstanding results in a variety of NLP downstream tasks, including document categorization, named entity recognition, and machine translation (Romanov and Khusainova, 2019; Kalyan et al., 2021; Kapočiūtė-Dzikienė et al., 2021). Although the use of such new-generation language models for calculating similarity scores is not very common, they have been applied in a few studies as of 2018.…”
Section: Methods Combining Word and Semantic Information
confidence: 99%
“…Neural network-based language models produce outstanding results in a variety of NLP downstream tasks, including document categorization, named entity recognition, and machine translation (Romanov and Khusainova, 2019; Kalyan et al., 2021; Kapočiūtė-Dzikienė et al., 2021). Few studies on the application of neural network-based models to the calculation of similarity scores have shown that they are more effective than other semantic-based methods (Ogunleye et al., 2018; Duan et al., 2019; Yong et al., 2019).…”
Section: Future Directions For Similarity Score Computation
confidence: 99%
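The citation statement above concerns embedding-based similarity scoring. As a rough illustration of that idea (not the specific method of the cited studies), the sketch below averages pretrained word vectors for two texts and compares them with cosine similarity; "vectors.kv" is a hypothetical pretrained gensim KeyedVectors file.

```python
# Illustrative embedding-based similarity score: mean word vector per text,
# then cosine similarity. Not the cited papers' exact method.
import numpy as np
from gensim.models import KeyedVectors

wv = KeyedVectors.load("vectors.kv")  # hypothetical pretrained vectors

def text_vector(tokens):
    """Mean of the vectors of in-vocabulary tokens (zeros if none found)."""
    vecs = [wv[t] for t in tokens if t in wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(wv.vector_size)

def similarity_score(tokens_a, tokens_b):
    """Cosine similarity between the two averaged text vectors."""
    a, b = text_vector(tokens_a), text_vector(tokens_b)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

print(similarity_score("deep learning for text".split(),
                       "neural networks for language".split()))
```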
“…This study presents the application of word embedding methods to effectively use the rich information in the abstracts and titles of project proposals in Turkish. Neural network-based word embedding methods (e.g., FastText, BERT) are used successfully in numerous NLP tasks (Romanov and Khusainova, 2019; Kapočiūtė-Dzikienė et al., 2021; Kalyan et al., 2021). However, to the best of our knowledge, research on project proposal grouping based on such next-generation representation approaches has not yet been reported.…”
Section: Literature Review
confidence: 99%
“…Neural network technologies for word embedding have recently shown remarkable results in NLP tasks (Romanov and Khusainova, 2019; Kapočiūtė-Dzikienė et al., 2021; Kalyan et al., 2021). However, to the best of our knowledge, a project proposal grouping study based on high-performance neural network-based textual feature extraction techniques has not yet been reported.…”
Section: Introduction
confidence: 99%