Proceedings of the Nineteenth Conference on Computational Natural Language Learning 2015
DOI: 10.18653/v1/k15-1026

Symmetric Pattern Based Word Embeddings for Improved Word Similarity Prediction

Abstract: We present a novel word-level vector representation based on symmetric patterns (SPs). For this aim we automatically acquire SPs (e.g., "X and Y") from a large corpus of plain text, and generate vectors where each coordinate represents the co-occurrence in SPs of the represented word with another word of the vocabulary. Our representation has three advantages over existing alternatives: First, being based on symmetric word relationships, it is highly suitable for word similarity prediction. Particularly, on the…
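To make the representation concrete, the sketch below (a minimal illustration, not the authors' code) counts co-occurrences inside a single hard-coded symmetric pattern, "X and Y", builds sparse vectors from those counts, and compares two words with cosine similarity. The paper acquires its patterns automatically and applies further weighting; the pattern choice, toy corpus, and helper names here are assumptions for the example.

```python
import re
from collections import defaultdict
from math import sqrt

# Minimal sketch (not the authors' implementation): build sparse word vectors
# from co-occurrence counts inside one hand-picked symmetric pattern,
# "X and Y". The paper acquires its patterns automatically from the corpus;
# the pattern choice and all helper names here are illustrative.

PATTERN = re.compile(r"\b(\w+) and (\w+)\b")

def sp_cooccurrence(sentences):
    """Count symmetric-pattern co-occurrences; counts are symmetrized
    because the pattern relates both words in both directions."""
    counts = defaultdict(lambda: defaultdict(float))
    for sent in sentences:
        for x, y in PATTERN.findall(sent.lower()):
            counts[x][y] += 1.0
            counts[y][x] += 1.0
    return counts

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(weight * v.get(key, 0.0) for key, weight in u.items())
    norm_u = sqrt(sum(w * w for w in u.values()))
    norm_v = sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

corpus = [
    "The dessert was rich and delicious.",
    "The meal was tasty and delicious.",
    "The soup was rich and creamy.",
]
vectors = sp_cooccurrence(corpus)
print(cosine(vectors["rich"], vectors["tasty"]))  # ~0.71
```

In this toy corpus, "rich" and "tasty" both co-occur with "delicious" inside the pattern, so their SP vectors overlap and receive a non-zero similarity.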

Cited by 94 publications (103 citation statements). References 32 publications.
“…2(a) and e.g. (Schwartz et al, 2015)), verbs are apparently easier to model in Italian (Fig. 2(c)). Table 3: Rankings based on Acc 1 scores over syntactic analogy groups (from the Google dataset).…”
Section: Results
Mentioning confidence: 99%
“…With the aim of resolving the sentiment contrast, a sentiment-specific word embedding [54] is learnt from weakly-supervised tweets collected by positive and negative emotions. Several neural-network-based models have been proposed to revisit word embeddings for lexical contrast [7][8][9][10]. There are two ways to get the contrasting pairs for learning embeddings.…”
Section: Related Work
Mentioning confidence: 99%
“…Chen et al. [7] and Mrksic et al. [8] both use lexical resources to get antonym pairs; however, these cover only a small part of the contrasting pairs. Schwartz et al. [9] apply patterns to get contrasting pairs from web text such as Wikipedia pages. This method also faces a low-coverage problem, because the number of contrasting pairs that can be described by "from X to Y" or "either X or Y" is limited.…”
Section: Related Work
Mentioning confidence: 99%
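The coverage concern in the quoted passage comes down to how few word pairs such surface patterns actually match. As a rough illustration of the extraction step (the two patterns are taken from the quote; the function name, lower-casing, and self-pair filter are assumptions for the example), a minimal Python sketch:

```python
import re

# Illustrative sketch of the pattern-based extraction the quoted passage
# describes: pull candidate contrasting pairs from raw text using the two
# surface patterns "from X to Y" and "either X or Y". The patterns come from
# the quote; the function name and filtering are assumptions for the example.

CONTRAST_PATTERNS = [
    re.compile(r"\bfrom (\w+) to (\w+)\b"),
    re.compile(r"\beither (\w+) or (\w+)\b"),
]

def extract_contrast_pairs(text):
    """Return candidate (X, Y) pairs matched by the contrast patterns."""
    pairs = set()
    for pattern in CONTRAST_PATTERNS:
        for x, y in pattern.findall(text.lower()):
            if x != y:  # drop trivial self-pairs such as "from time to time"
                pairs.add((x, y))
    return pairs

print(extract_contrast_pairs(
    "Reviews ranged from terrible to excellent, and drinks were either hot or cold."
))
# expected: {('terrible', 'excellent'), ('hot', 'cold')}
```

Even over large text collections, only a small fraction of contrasting pairs ever appears inside these exact templates, which is the low-coverage issue the quoted passage raises.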