Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, 2017
DOI: 10.18653/v1/e17-1016

Evaluation by Association: A Systematic Study of Quantitative Word Association Evaluation

Abstract: Recent work on evaluating representation learning architectures in NLP has established a need for evaluation protocols based on subconscious cognitive measures rather than manually tailored intrinsic similarity and relatedness tasks. In this work, we propose a novel evaluation framework that enables large-scale evaluation of such architectures in the free word association (WA) task, which is firmly grounded in cognitive theories of human semantic representation. This evaluation is facilitated by the existence …
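As a rough illustration of the kind of evaluation the abstract describes (not the paper's actual protocol), the sketch below scores cue–response pairs with cosine similarity over pretrained word vectors and correlates those scores with human association strengths using Spearman's ρ. The function names and input format are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    # Cosine similarity between two dense word vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate_word_association(vectors, wa_pairs):
    """Correlate model similarity with human association strength.

    vectors: dict mapping a word to its np.ndarray embedding (assumed input).
    wa_pairs: iterable of (cue, response, human_strength) triples (assumed input).
    Returns Spearman's rho over the pairs covered by the vocabulary.
    """
    model_scores, human_scores = [], []
    for cue, response, strength in wa_pairs:
        if cue in vectors and response in vectors:
            model_scores.append(cosine(vectors[cue], vectors[response]))
            human_scores.append(strength)
    rho, _ = spearmanr(model_scores, human_scores)
    return rho
```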

Cited by 6 publications (6 citation statements)
References: 65 publications

“…The analysis of correlation between model rankings on full SimVerb and SpA-Verb again produces a high correlation score (ρ = 0.77). These figures are enlightening when compared with similar analyses in previous work (Vulić, Kiela, and Korhonen 2017). While Vulić, Kiela, and Korhonen (2017) report very high correlations between model rankings on SimLex and SimVerb (>0.95), both of which measure semantic similarity, the scores are much lower between model rankings on SimLex or SimVerb and MEN (Bruni, Tran, and Baroni 2014) (0.342 and 0.448, respectively), a data set which captures broader conceptual relatedness.…”
Section: Model (supporting)
confidence: 69%
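The analysis quoted above compares model rankings across benchmarks with Spearman's ρ. A minimal sketch of that kind of ranking comparison is shown below; the model names and scores are made-up placeholders, not figures from the cited work.

```python
from scipy.stats import spearmanr

# Hypothetical per-model scores on two benchmarks (placeholders, not real results).
scores_a = {"model1": 0.41, "model2": 0.35, "model3": 0.52}  # e.g., a similarity benchmark
scores_b = {"model1": 0.44, "model2": 0.30, "model3": 0.55}  # e.g., a relatedness benchmark

models = sorted(scores_a)
# Spearman's rho on the raw scores equals the correlation of the induced model rankings.
rho, _ = spearmanr([scores_a[m] for m in models], [scores_b[m] for m in models])
print(f"Spearman correlation of model rankings: {rho:.2f}")
```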
“…Semantic similarity and association overlap to some degree, but do not coincide (Kiela, Hill, and Clark 2015; Vulić, Kiela, and Korhonen 2017). In fact, there exist plenty of pairs that are intuitively associated but not similar.…”
Section: Similarity and Association (mentioning)
confidence: 99%
“…As compared to visual embeddings used in previous works, we found that denotational embeddings are particularly useful for detecting semantic relations. Other, recently proposed tasks related to modeling word association (Vulić et al., 2017), commonsense knowledge (Vedantam et al., 2015) or child-directed input (Lazaridou et al., 2016) provide interesting testbeds for future work.…”
Section: Discussion (mentioning)
confidence: 99%