Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016
DOI: 10.18653/v1/n16-1018

Counter-fitting Word Vectors to Linguistic Constraints

Abstract: In this work, we present a novel counter-fitting method which injects antonymy and synonymy constraints into vector space representations in order to improve the vectors' capability for judging semantic similarity. Applying this method to publicly available pre-trained word vectors leads to a new state of the art performance on the SimLex-999 dataset. We also show how the method can be used to tailor the word vector space for the downstream task of dialogue state tracking, resulting in robust improvements across different dialogue domains.
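As a rough illustration of what injecting such constraints can look like in practice, the sketch below applies simple gradient updates that push antonym pairs apart, pull synonym pairs together, and regularise each vector towards its pre-trained position. It is a minimal approximation under assumed settings (the names syn_margin, ant_margin, reg, lr and epochs are illustrative), not the paper's exact objective or optimisation procedure.

```python
import numpy as np

def counter_fit(vectors, synonyms, antonyms,
                syn_margin=0.9, ant_margin=0.0, reg=0.2,
                lr=0.05, epochs=20):
    # vectors  : dict mapping word -> 1-D numpy array (pre-trained embedding)
    # synonyms : iterable of (w1, w2) pairs to pull together
    # antonyms : iterable of (w1, w2) pairs to push apart
    # NOTE: hyper-parameters and update rules are illustrative assumptions.
    vecs = {w: v / np.linalg.norm(v) for w, v in vectors.items()}
    original = {w: v.copy() for w, v in vecs.items()}

    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    for _ in range(epochs):
        # Antonym repel: nudge apart pairs that are still too similar.
        for w1, w2 in antonyms:
            if w1 in vecs and w2 in vecs and cos(vecs[w1], vecs[w2]) > ant_margin:
                g1, g2 = vecs[w2].copy(), vecs[w1].copy()
                vecs[w1] -= lr * g1
                vecs[w2] -= lr * g2
        # Synonym attract: nudge together pairs that are not yet similar enough.
        for w1, w2 in synonyms:
            if w1 in vecs and w2 in vecs and cos(vecs[w1], vecs[w2]) < syn_margin:
                diff = vecs[w1] - vecs[w2]
                vecs[w1] -= lr * diff
                vecs[w2] += lr * diff
        # Vector-space preservation: stay close to the original embeddings,
        # then re-normalise so cosine comparisons remain meaningful.
        for w in vecs:
            vecs[w] -= lr * reg * (vecs[w] - original[w])
            vecs[w] /= np.linalg.norm(vecs[w])

    return vecs

# Hypothetical toy usage with random 50-dimensional vectors:
toy = {w: np.random.randn(50) for w in ["cheap", "expensive", "inexpensive", "pricey"]}
fitted = counter_fit(toy,
                     synonyms=[("cheap", "inexpensive"), ("expensive", "pricey")],
                     antonyms=[("cheap", "expensive"), ("inexpensive", "pricey")])
```

The actual method optimises hinge-style penalties over cosine distances for the three kinds of terms; the per-pair gradient steps above are only meant to convey the attract/repel/preserve structure of the objective.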

Cited by 265 publications (318 citation statements). References 24 publications.
“…However, we often need embeddings to be similar only if an exact lexico-semantic relation holds between the words. Numerous methods for specializing word embeddings for particular relations have been proposed (Yu and Dredze, 2014; Faruqui et al., 2015; Kiela et al., 2015; Mrkšić et al., 2016, inter alia), primarily aiming to differentiate synonymic similarity from other types of semantic relatedness.…”
Section: Related Work
Confidence: 99%
“…Having an explicit embedding specialization function alleviates the need to specialize the entire unspecialized embedding space at once, like existing models do (Faruqui et al., 2015; Mrkšić et al., 2016).…”
Section: Dual Tensor Model
Confidence: 99%
“…Although OOVs are a significant challenge, we believe they can be overcome by training better sentiment-sensitized word embeddings (Mrkšić et al., 2016), or combining the system with character-level normalization methods (Han and Baldwin, 2011).…”
Section: Builder
Confidence: 99%