Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/D17-1193

Exploring Vector Spaces for Semantic Relations

Abstract: Word embeddings are used with success for a variety of tasks involving lexical semantic similarities between individual words. Encouraging results have been obtained for analogical similarities using unsupervised methods and just cosine similarity. In this paper, we explore the potential of pre-trained word embeddings to identify generic types of semantic relations in an unsupervised experiment. We propose a new relational similarity measure based on the combination of word2vec's CBOW input and output vectors which…
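The abstract is truncated above, so the exact form of the proposed measure is not recoverable from this page. Still, the idea of combining CBOW input and output vectors for relational similarity can be sketched. Below is a minimal sketch assuming a gensim Word2Vec model trained with CBOW and negative sampling (so the output embeddings are exposed as `syn1neg`); the concatenation and pair-offset cosine are illustrative assumptions, not the paper's exact measure.

```python
import numpy as np
from gensim.models import Word2Vec  # CBOW is gensim's sg=0 mode

def inout_vector(model, word):
    """Concatenate a word's CBOW input vector with its output vector.

    Assumes training with negative sampling, so the output embeddings
    are stored in `model.syn1neg` (a real gensim attribute); the
    concatenation itself is an illustrative choice, since the paper's
    exact combination is cut off in the abstract above.
    """
    idx = model.wv.key_to_index[word]
    return np.concatenate([model.wv.vectors[idx], model.syn1neg[idx]])

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def relational_similarity(model, pair_a, pair_b):
    """Score two word pairs by the cosine of their offset vectors."""
    a1, a2 = (inout_vector(model, w) for w in pair_a)
    b1, b2 = (inout_vector(model, w) for w in pair_b)
    return cosine(a2 - a1, b2 - b1)

# Example usage (hypothetical corpus):
#   model = Word2Vec(sentences, sg=0, negative=5, vector_size=100)
#   relational_similarity(model, ("paris", "france"), ("rome", "italy"))
```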

Cited by 11 publications (6 citation statements) · References 23 publications (38 reference statements)
“…In this section, we first compared the pure course2vec model with the course representations from the multi-factor course2vec model using instructor, department, and both as factors. To further explore improvements to performance, we concatenated the primary course representational layer (W_{n×v} in Figure 1) with the output representation layer (W′_{v×n} in Figure 1), as demonstrated to be effective in the language domain [10].…”
Section: Course2vec vs. Multi-factor Course2vec
confidence: 99%
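The concatenation described in this statement can be sketched directly in numpy. This is a minimal sketch with placeholder matrices: the shapes follow the quote (W is n×v, W′ is v×n), and the names `W`, `W_prime`, `n`, and `v` are assumptions standing in for the trained layers of a course2vec-style model.

```python
import numpy as np

# Placeholder stand-ins for the trained layers named in the quote:
# W (n×v) is the primary/input representation layer, one row per item;
# W_prime (v×n) is the output representation layer, one column per item.
n, v = 1000, 128  # assumed: n vocabulary items, v embedding dimensions
rng = np.random.default_rng(0)
W = rng.normal(size=(n, v))
W_prime = rng.normal(size=(v, n))

# Concatenate each item's input row with its output column, giving
# one 2v-dimensional representation per item, as in the quote above.
combined = np.concatenate([W, W_prime.T], axis=1)
assert combined.shape == (n, 2 * v)
```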
“…They are based on the distributional hypothesis (Harris, 1954) and the assumption that meaning can be encoded in a vector space (Turney and Pantel, 2010; Erk, 2010). These approaches also search for latent and independent components that underlie the behavior of words (Gábor et al., 2017; Mikolov et al., 2013a).…”
Section: Motivation for Semantic Units
confidence: 99%
“…Following Gábor et al. (2017), we also report the accuracy (A) that would be achieved by the clustering if we assigned every cluster to the class that is most frequent in that cluster and then used the clustering as a classifier. The classes column (cls) shows how many classes were assigned to at least one of the clusters.…”
Section: Unsupervised Clustering
confidence: 99%
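The accuracy described in this statement is a majority-class (purity-style) mapping from clusters to gold classes. Below is a minimal sketch of that evaluation as described in the quote; the function name and array-based interface are my own assumptions.

```python
import numpy as np
from collections import Counter

def cluster_accuracy(cluster_ids, gold_labels):
    """Accuracy when each cluster is mapped to its most frequent class.

    Follows the evaluation quoted above: assign every cluster to the
    gold class most frequent in it, then score the induced classifier.
    Also returns how many distinct classes were assigned to at least
    one cluster (the "cls" figure in the quote).
    """
    cluster_ids = np.asarray(cluster_ids)
    gold_labels = np.asarray(gold_labels)
    correct = 0
    assigned = set()
    for c in np.unique(cluster_ids):
        members = gold_labels[cluster_ids == c]
        majority, count = Counter(members.tolist()).most_common(1)[0]
        assigned.add(majority)
        correct += count
    return correct / len(gold_labels), len(assigned)

# Example: two clusters over three gold classes
#   cluster_accuracy([0, 0, 1, 1], ["A", "A", "B", "C"]) -> (0.75, 2)
```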