Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018
DOI: 10.18653/v1/n18-1062

Querying Word Embeddings for Similarity and Relatedness

Abstract: Word embeddings obtained from neural network models such as Word2Vec Skipgram have become popular representations of word meaning and have been evaluated on a variety of word similarity and relatedness norming data. Skipgram generates a set of word and context embeddings, the latter typically discarded after training. We demonstrate the usefulness of context embeddings in predicting asymmetric association between words from a recently published dataset of production norms (Jouravlev and McRae, 2016). Our findi…
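The abstract's distinction between symmetric similarity (word-word) and asymmetric association (word-context) can be sketched in a few lines. The toy vectors and vocabulary below are invented for illustration only; they are not the paper's model or data, and the scoring functions are a minimal reading of the idea, not the authors' exact method.

```python
import math

# Hypothetical toy embeddings. In a real Skipgram model, word_emb would be
# the input (word) weight matrix and ctx_emb the output (context) weight
# matrix; all names and values here are illustrative assumptions.
word_emb = {
    "coffee": [0.9, 0.1, 0.3],
    "tea":    [0.8, 0.2, 0.3],
    "cup":    [0.2, 0.9, 0.1],
}
ctx_emb = {
    "coffee": [0.1, 0.8, 0.2],
    "tea":    [0.2, 0.7, 0.1],
    "cup":    [0.9, 0.2, 0.4],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similarity(a, b):
    # Symmetric: both words are looked up in the word-embedding matrix,
    # so similarity(a, b) == similarity(b, a) by construction.
    return cosine(word_emb[a], word_emb[b])

def relatedness(cue, target):
    # Asymmetric: cue from the word matrix, target from the context matrix,
    # so relatedness(a, b) != relatedness(b, a) in general, matching the
    # directional cue-target structure of production norms.
    return cosine(word_emb[cue], ctx_emb[target])

print(similarity("coffee", "tea"))
print(relatedness("coffee", "cup"))
print(relatedness("cup", "coffee"))
```

The asymmetry is the point: with these toy vectors, `relatedness("coffee", "cup")` and `relatedness("cup", "coffee")` differ, whereas `similarity` cannot encode direction.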

Cited by 24 publications (20 citation statements)
References 24 publications
“…Phone embeddings are not able to capture co-occurrence restrictions among consonants such as homorganic nasal-voiced obstruent clusters, voiced obstruent-lateral cluster and homorganic nasal-sibilant-voiceless stop clusters. This observation is similar to one reported in the distributed semantic literature that word embeddings capture similarity better than relatedness (Asr et al, 2018).…”
supporting
confidence: 90%
“…Phone embeddings are not able to capture co-occurrence restrictions among consonants such as homorganic nasal-voiced obstruent clusters, voiced obstruent-lateral cluster and homorganic nasal-sibilant-voiceless stop clusters. This observation is similar to one reported in the distributed semantic literature that word embeddings capture similarity better than relatedness (Asr et al, 2018). Based on insights from the word embedding literature, context embeddings denoted by the hidden to output layer weight matrix, are supposed to be able to capture better syntagmatic relationships like co-occurrence restrictions.…”
Section: Learning Artificial Phonology With Word2vec
supporting
confidence: 75%
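The quoted claim, that context embeddings (the hidden-to-output weight matrix) are better suited to syntagmatic relationships such as co-occurrence restrictions, can be illustrated with a toy nearest-neighbour sketch. All vectors and words below are invented for illustration and do not reproduce the cited experiments: scoring a cue against the word matrix favours paradigmatic neighbours (substitutable words), while scoring it against the context matrix favours syntagmatic neighbours (co-occurring words).

```python
import math

# W: hypothetical input (word) embeddings; C: hypothetical output (context)
# embeddings, i.e. the hidden-to-output weight matrix. Toy values only.
W = {"eat": [1.0, 0.1], "drink": [0.9, 0.2], "bread": [0.1, 1.0]}
C = {"eat": [0.2, 0.9], "drink": [0.1, 0.8], "bread": [1.0, 0.1]}

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def neighbours(cue, target_matrix):
    """Rank all other words by cosine between the cue's word vector
    and each candidate's vector in target_matrix."""
    scores = {w: cos(W[cue], target_matrix[w]) for w in W if w != cue}
    return sorted(scores, key=scores.get, reverse=True)

print(neighbours("eat", W))  # word-word: "drink" ranks first (paradigmatic)
print(neighbours("eat", C))  # word-context: "bread" ranks first (syntagmatic)
```

Swapping the target matrix changes which relation the ranking reflects, which is the distinction the citing paper carries over from the word-embedding literature to phone embeddings.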
“…WordSimilarity-353 (Finkelstein et al, 2001) (EN-WS-353-SIM) is less clear on whether it evaluates similarity or relatedness, as in contrast to its title, human participants were asked "to estimate the relatedness of the words". Lofi (2015) or Asr et al (2018) provide good introductions to the difference between evaluating similarity versus relatedness. Table 1 summarizes the different results obtained with our joint learning approach (with and without antonyms), separate results for first order and second order representations, and word2vec (skip-gram with negative sampling) with and without retrofitting.…”
Section: Results
mentioning
confidence: 99%
“…Word embeddings have been argued to reflect how language users organise concepts (Mandera et al, 2017;Torabi Asr et al, 2018). The extent to which they really do so has been evaluated, e.g., using semantic word similarity and association norms (Hill et al, 2015;Gerz et al, 2016), and word analogy benchmarks (Mikolov et al, 2013c).…”
Section: Introduction
mentioning
confidence: 99%