Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, 2017
DOI: 10.18653/v1/e17-1007
Hypernyms under Siege: Linguistically-motivated Artillery for Hypernymy Detection

Abstract: The fundamental role of hypernymy in NLP has motivated the development of many methods for the automatic identification of this relation, most of which rely on word distribution. We investigate an extensive number of such unsupervised measures, using several distributional semantic models that differ by context type and feature weighting. We analyze the performance of the different methods based on their linguistic motivation. Comparison to the state-of-the-art supervised methods shows that while supervised me…
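As a concrete illustration of the unsupervised, distributional measures the abstract refers to, the toy sketch below implements WeedsPrec, a classic inclusion-based measure of this kind. The words, context features and weights are invented placeholders, not data from the paper.

```python
# Toy sketch of WeedsPrec, an inclusion-based unsupervised hypernymy measure.
# Each word is represented by a hypothetical {context_feature: weight} dict,
# e.g. PPMI-weighted contexts from a distributional semantic model (DSM).

def weeds_prec(narrow, broad):
    """Weighted inclusion of the narrow term's features among the broad term's.

    Returns a value in [0, 1]; under the distributional inclusion hypothesis,
    a high value suggests `broad` is a hypernym of `narrow`.
    """
    if not narrow:
        return 0.0
    shared = sum(w for f, w in narrow.items() if f in broad)
    return shared / sum(narrow.values())


# Invented feature weights, for illustration only.
cat = {"purr": 2.1, "pet": 1.4, "fur": 1.8, "feed": 0.9}
animal = {"pet": 1.1, "fur": 0.7, "feed": 1.3, "wild": 1.0, "species": 1.6}

print(weeds_prec(cat, animal))   # ~0.66: 'animal' covers most of cat's contexts
print(weeds_prec(animal, cat))   # ~0.54: the measure is asymmetric
```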

Cited by 86 publications (141 citation statements)
References: 30 publications
“…We generated a list of candidate hypernyms for each target word, and then employed unsupervised hypernymy detection measures to decide whether a hypernymy relation holds. We used the open-source code by Shwartz et al. (2017). Our baseline starts by creating a distributional semantic model (DSM) for each domain/language (English, Spanish, Italian, Music and Medical).…”
Section: Unsupervised Baselines (mentioning; confidence: 99%)
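The statement above ends with scoring candidate hypernyms and deciding whether the relation holds. The fragment below is a hedged sketch of that decision step, not the cited system's code: detect_hypernyms, the candidate lists and the 0.6 threshold are assumptions, and score_fn could be, for instance, the weeds_prec toy function from the earlier sketch.

```python
# Hypothetical decision step: score each candidate hypernym of a target word
# with an unsupervised measure and keep the candidates above a threshold.
# All names and the threshold value are illustrative assumptions.

def detect_hypernyms(target_vec, candidates, vectors, score_fn, threshold=0.6):
    """Return (candidate, score) pairs that clear the threshold, best first."""
    scored = [(c, score_fn(target_vec, vectors[c])) for c in candidates]
    return sorted((cs for cs in scored if cs[1] >= threshold),
                  key=lambda cs: cs[1], reverse=True)
```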
“…Similarly to the hyponym selection step (Section 4.1.3), all the terms with a frequency of at least 3 occurrences in the source corpus are considered valid targets. For the context words, instead, we required a minimum of 100 occurrences, as in Shwartz et al. (2017). To generate candidates, we took the 50 most similar terms for each target word via cosine similarity in the DSM.…”
Section: Unsupervised Baselines (mentioning; confidence: 99%)
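The excerpt above gives concrete settings: targets need at least 3 corpus occurrences, context words at least 100, and candidates are the 50 nearest terms by cosine similarity in the DSM. The sketch below illustrates only the candidate-generation step under those settings; candidate_hypernyms, embeddings and frequencies are assumed names, and the 100-occurrence context-word filter is taken to apply when the DSM itself is built.

```python
import numpy as np

# Sketch of candidate generation: filter targets by corpus frequency, then
# take the top-k most similar terms by cosine similarity in the DSM.
# `embeddings` ({word: np.ndarray}) and `frequencies` ({word: count}) are
# hypothetical inputs, not the cited system's actual data structures.

def candidate_hypernyms(target, embeddings, frequencies,
                        min_target_freq=3, top_k=50):
    if frequencies.get(target, 0) < min_target_freq:
        return []  # the target is too rare to be considered
    words = [w for w in embeddings if w != target]
    matrix = np.stack([embeddings[w] for w in words])
    vec = embeddings[target]
    sims = matrix @ vec / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(vec) + 1e-9)
    order = np.argsort(-sims)[:top_k]
    return [(words[i], float(sims[i])) for i in order]
```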
“…All distributional models we evaluate achieve poorer performance on meronymy than on hypernymy detection, especially considering that WN-Me is a balanced dataset, whereas HypeNet is heavily skewed towards negative instances. Shwartz et al. (2017) propose ranking as an alternative evaluation setting for hypernymy detection. The goal is to rank positive relation pairs higher than negative ones.…”
Section: Classification Experiments (mentioning; confidence: 99%)
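The ranking setting described above, where positive pairs should outrank negative ones, is commonly summarised with Average Precision. Below is a minimal scikit-learn illustration; the labels and scores are toy values, not results from any of the cited papers.

```python
from sklearn.metrics import average_precision_score

# Toy ranking evaluation: 1 marks a true hypernymy pair, 0 a negative pair;
# scores are the values an unsupervised measure might assign to each pair.
labels = [1, 0, 1, 0, 0, 1]
scores = [0.91, 0.80, 0.77, 0.65, 0.20, 0.85]

# ~0.92 here: one negative pair outranks a positive one; a perfect ranking
# (all positives above all negatives) would give 1.0.
print(average_precision_score(labels, scores))
```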