2023
DOI: 10.1007/978-981-19-7615-5_37
Effect of GloVe, Word2Vec and FastText Embedding on English and Hindi Neural Machine Translation Systems

Cited by 2 publications (1 citation statement)
References 18 publications
“…These techniques are based on the hypothesis that words in similar contexts tend to have similar meanings. To this end, word embedding models such as Word2Vec, GloVe and FastText learn to assign similar vectors to words that have similar meanings or are used in similar contexts [32,33]. This means that words with similar semantic or syntactic properties have similar vector representations, and their embedding space distances reflect their relationships [34].…”
Section: Text Representation (mentioning)
Confidence: 99%
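The quoted statement says that Word2Vec, GloVe and FastText assign nearby vectors to words used in similar contexts, so that distances in the embedding space reflect semantic relationships. A minimal sketch of that idea, using tiny hand-made toy vectors (the values and the 4-dimensional size are illustrative assumptions; real embeddings are learned and typically have 100-300 dimensions):

```python
import math

# Hypothetical toy embeddings for illustration only -- real
# Word2Vec/GloVe/FastText vectors are learned from corpora.
embeddings = {
    "king":  [0.80, 0.65, 0.10, 0.05],
    "queen": [0.75, 0.70, 0.12, 0.06],
    "apple": [0.05, 0.10, 0.90, 0.70],
}

def cosine_similarity(u, v):
    """Cosine of the angle between vectors u and v (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Words used in similar contexts get similar vectors, so their cosine
# similarity is high; semantically unrelated words score much lower.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["apple"]))
```

Cosine similarity is the standard distance measure in this setting because it compares vector direction rather than magnitude, which is what the "embedding space distances reflect their relationships" claim refers to.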