Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d15-1291
Syntactic Dependencies and Distributed Word Representations for Analogy Detection and Mining

Abstract: Distributed word representations capture relational similarities by means of vector arithmetic, giving high accuracy on analogy detection. We empirically investigate the use of syntactic dependencies for improving Chinese analogy detection based on distributed word representations, showing that dependency-based embeddings do not perform better than ngram-based embeddings, but that dependency structures can be used to improve analogy detection by filtering candidates. In addition, we show that distributed r…
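The vector-arithmetic analogy detection the abstract refers to can be sketched as follows. This is a minimal illustration with hand-made toy embeddings (the words, dimensions, and values are hypothetical, not the paper's trained vectors): given an analogy a : b :: c : ?, the answer is the word whose vector is closest in cosine similarity to b - a + c.

```python
# Toy sketch of analogy detection by vector arithmetic.
# The 3-d embeddings below are hand-made assumptions, chosen so that
# king - man + woman lands near queen; real systems use trained vectors.
import math

emb = {
    "king":  [0.8, 0.9, 0.1],
    "queen": [0.8, 0.1, 0.9],
    "man":   [0.2, 0.9, 0.1],
    "woman": [0.2, 0.1, 0.9],
    "apple": [0.1, 0.5, 0.5],
}

def cos(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def analogy(a, b, c):
    """Return the word d maximizing cos(emb[b] - emb[a] + emb[c], emb[d])."""
    target = [y - x + z for x, y, z in zip(emb[a], emb[b], emb[c])]
    # Exclude the three query words, as is standard in analogy evaluation.
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cos(target, emb[w]))

print(analogy("man", "king", "woman"))  # -> queen
```

The paper's proposed filtering step would additionally restrict the candidate list using dependency structures before the cosine ranking.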

Cited by 8 publications (7 citation statements). References 17 publications.
“…Embedding evaluation was performed on Chinese WordSim (CWS), a transcription of English WS-353 by two undergraduate students with an excellent understanding of English (Qiu, Zhang, & Lu, 2015). The similarity scores contained in CWS were rescored by 20 native Chinese speakers and used in SemEval-2012 task 4 (Jin & Wu, 2012).…”
Section: Abnormal Dimensions in Chinese Word Embedding
confidence: 99%
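The word-similarity evaluation described in the excerpt above is typically scored by rank-correlating model similarities with human judgments (Spearman's rho). A minimal sketch, with hypothetical word pairs and scores rather than actual CWS data:

```python
# Minimal sketch of word-similarity evaluation: rank-correlate model
# similarity scores with human judgments via Spearman's rho.
# Word pairs and all scores below are made-up illustration data.
import math

human = {("tiger", "cat"): 7.35, ("book", "paper"): 7.46, ("stock", "egg"): 1.81}
model = {("tiger", "cat"): 0.62, ("book", "paper"): 0.71, ("stock", "egg"): 0.05}

def ranks(xs):
    # Rank values ascending (1 = smallest); assumes no ties for brevity.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(xs, ys):
    # Spearman's rho = Pearson correlation of the ranks.
    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

pairs = sorted(human)
rho = spearman([human[p] for p in pairs], [model[p] for p in pairs])
print(round(rho, 3))  # 1.0 here: the two rankings agree perfectly
```

A higher rho means the embedding's cosine similarities order word pairs more like the human annotators did.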
“…Since distributed representations play an important role in various NLP tasks, they have been applied to semantics (Herbelot and Vecchi, 2015; Qiu et al., 2015; Woodsend and Lapata, 2015), with external information incorporated into them (Tian et al., 2016; Nguyen et al., 2016). In addition, finding interpretable regularities in the representations is often done through non-negative and sparse coding (Murphy et al., 2012; Faruqui et al., 2015; Luo et al., 2015; Kober et al., 2016) and regularization (Sun et al., 2016).…”
Section: Related Work
confidence: 99%
“…Le and Mikolov (2014) proposed an unsupervised algorithm that learns fixed-length feature representations from variable-length texts. Qiu et al. (2015) explored distributed representations of words to detect analogies. In this paper, we exploit the distributed-representation approach to transform item descriptions into vectors and assist recommendation based on these vectors.…”
Section: Related Work
confidence: 99%