2019
DOI: 10.1016/j.procs.2019.08.153

Word2Vec Model Analysis for Semantic Similarities in English Words

Cited by 137 publications (72 citation statements)
References 3 publications
“…These correlation scores can be found in Table 8 in the Appendix, they are all between .63 and .66. This is comparable to the correlations between .60 and .67 that were found using word2vec by Jatnika et al [2019], which is already better than the model trained by Google they used as comparison [Mikolov et al, 2013b]. As may be expected with a smaller corpus, the scores for the data set of new articles are slightly lower (between .59 and .64), but still reasonable.…”
Section: Methods (supporting)
confidence: 75%
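The correlations quoted above compare model similarity scores against human similarity ratings for word pairs. A minimal sketch of such an evaluation is given below, assuming a gensim KeyedVectors model and a Pearson correlation from SciPy; the word pairs, ratings, and model path are placeholders, not data from the cited studies.

```python
# Sketch of an embeddings-vs-human-judgement correlation check (assumed setup;
# pairs, ratings, and the model file are placeholders, not data from the paper).
from gensim.models import KeyedVectors
from scipy.stats import pearsonr

def correlation_with_human_ratings(kv: KeyedVectors, pairs, human_ratings):
    """Pearson correlation between model cosine similarities and human ratings."""
    model_scores = [kv.similarity(w1, w2) for w1, w2 in pairs]
    corr, _ = pearsonr(model_scores, human_ratings)
    return corr

# Hypothetical usage:
# kv = KeyedVectors.load("my_word2vec.kv")
# pairs = [("car", "automobile"), ("king", "queen"), ("cup", "forest")]
# print(correlation_with_human_ratings(kv, pairs, [9.5, 8.1, 1.2]))
```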
“…In the same direction, the study in [22] investigated the similarity of English words using the Word2Vec representation technique. The model was trained on English Wikipedia pages, and the cosine similarity method was employed for determining the similarity values.…”
Section: Utilization Of Word2Vec In OSNs (mentioning)
confidence: 99%
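As a rough illustration of the setup that statement describes, the sketch below trains a gensim Word2Vec model on a toy tokenized corpus and scores a word pair with cosine similarity. The corpus, hyperparameters, and word pair are placeholders, not the configuration of the cited study, which trained on English Wikipedia pages.

```python
# Illustrative sketch: train Word2Vec on a tiny corpus and score a word pair
# with cosine similarity (toy data; not the cited study's Wikipedia corpus).
from gensim.models import Word2Vec

sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["a", "dog", "chased", "a", "cat"],
]  # stand-in for tokenized English Wikipedia pages

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)

# Cosine similarity between two in-vocabulary words (a value in [-1, 1])
print(model.wv.similarity("king", "queen"))
```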
“…Cosine similarity is a real number between -1 and 1. If the cosine similarity between two words is close to -1, the words tend to have opposite meanings; if it is close to 1, they tend to have nearly the same meaning [30]. Equation 1 shows the formula for cosine similarity, where W and X are word embedding vectors, S(W_k, X_k) denotes the cosine similarity between W_k and X_k, k indexes each word, and n is the number of words embedded.…”
Section: B. Cosine-Similarity-Based Abstract Similarity Using Word2Vec (mentioning)
confidence: 99%
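The Equation 1 referenced in that statement is not reproduced in this excerpt. Under the notation it describes (W and X embedding vectors, k the summation index, n the number of components summed), the standard cosine similarity it presumably refers to is the following; this is an assumption based on the usual definition, not a reproduction of the paper's equation.

$$S(W, X) = \frac{\sum_{k=1}^{n} W_k X_k}{\sqrt{\sum_{k=1}^{n} W_k^{2}}\,\sqrt{\sum_{k=1}^{n} X_k^{2}}}$$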