Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d17-1028

Exploiting Morphological Regularities in Distributional Word Representations

Abstract: We present a simple, fast, and unsupervised approach for exploiting morphological regularities present in high-dimensional vector spaces. We propose a novel method for generating embeddings of words from their morphological variants using morphological transformation operators. We evaluate this approach on the MSR word analogy test set (Mikolov et al., 2013d), achieving an accuracy of 85%, which is 12% higher than the previous best known system.
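The abstract does not spell out the form of the transformation operators. As a rough illustration only, one common way to realize such an operator is a linear map fit by least squares over pairs of (base, derived) word embeddings; the dimensions, data, and variable names below are assumptions for the sketch, not the paper's implementation.

```python
import numpy as np

# Hedged sketch: one linear operator per orthographic rule (e.g. base -> base+"ing"),
# fit by least squares so that applying the operator to a base-word vector
# approximates the derived word's vector. All sizes and data are illustrative.
rng = np.random.default_rng(0)
d, n = 50, 200                         # embedding dim and number of word pairs (assumed)
X = rng.normal(size=(n, d))            # stand-ins for base-form embeddings
W_true = rng.normal(size=(d, d))       # synthetic "rule" used only to fake data
Y = X @ W_true + 0.01 * rng.normal(size=(n, d))  # stand-ins for derived-form embeddings

# Least-squares fit of the transformation operator: min_W ||X W - Y||^2.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Embed an unseen derived form directly from its base form's vector.
v_base = rng.normal(size=d)
v_derived = v_base @ W
```

Because the operator is linear and fit in closed form, it can be trained per rule in one pass, which is consistent with the abstract's "simple, fast and unsupervised" framing.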

Cited by 1 publication (2 citation statements). References 10 publications.
“…However, we use more expressive, nonlinear functions to model derivational transformations and report positive results. Gupta et al. (2017) then learn a linear transformation per orthographic rule to solve a word analogy task. Our distributional model learns a function per derivational transformation, not per orthographic rule, which allows it to generalize to unseen orthography.…”
Section: Strengths of Seq and Dist
confidence: 99%
“…We propose to learn a function for each transformation in a low dimensional vector space that corresponds to mapping from representations of the root word to the derived word. This eliminates the reliance on orthographic information, unlike related approaches to distributional semantics, which operate at the suffix level (Gupta et al., 2017).…”
Section: Introduction
confidence: 99%
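To make the contrast drawn in these citation statements concrete, a minimal sketch of the cited alternative, a nonlinear function learned per derivational transformation rather than a linear map per orthographic rule, might look as follows. The architecture, sizes, and data are illustrative assumptions, not the citing authors' implementation.

```python
import numpy as np

# Hedged sketch: a small nonlinear network per derivational transformation
# (e.g. adjective -> "-ness" noun) mapping root-word vectors to derived-word
# vectors. All sizes and data below are illustrative stand-ins.
rng = np.random.default_rng(1)
d, h, n = 50, 64, 200                  # embedding dim, hidden size, pair count (assumed)
X = rng.normal(size=(n, d))            # root-word vectors (stand-ins)
Y = rng.normal(size=(n, d))            # derived-word vectors (stand-ins)
W1 = 0.1 * rng.normal(size=(d, h))
W2 = 0.1 * rng.normal(size=(h, d))

lr = 1e-2
for _ in range(500):                   # plain gradient descent on squared error
    H = np.tanh(X @ W1)                # nonlinear hidden layer
    P = H @ W2                         # predicted derived-word vectors
    G = 2.0 * (P - Y) / n              # dLoss/dP for mean squared error
    dW2 = H.T @ G                      # backprop through the output layer
    dW1 = X.T @ ((G @ W2.T) * (1.0 - H**2))  # backprop through tanh
    W1 -= lr * dW1
    W2 -= lr * dW2
```

Because the function is keyed to the semantic transformation rather than to a suffix pattern, it can in principle be applied to root words whose derived forms follow unseen orthography, which is the generalization the citing work emphasizes.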