Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP '06), 2006
DOI: 10.3115/1610075.1610157
Two graph-based algorithms for state-of-the-art WSD

Abstract: This paper explores the use of two graph algorithms for unsupervised induction and tagging of nominal word senses based on corpora. Our main contribution is the optimization of the free parameters of those algorithms and its evaluation against publicly available gold standards. We present a thorough evaluation comprising supervised and unsupervised modes, and both lexical-sample and all-words tasks. The results show that, in spite of the information loss inherent to mapping the induced senses to the gold-stand…

Cited by 57 publications (64 citation statements)
References 13 publications (26 reference statements)
“…Graph-based clustering for WSI has a long history and many different variations (Lin et al., 1998; Pantel and Lin, 2002; Dorow and Widdows, 2003; Véronis, 2004; Agirre et al., 2006; Biemann, 2006; Navigli and Crisafulli, 2010; Hope and Keller, 2013; Di Marco and Navigli, 2013; Mitra et al., 2014; Pelevina et al., 2016). In general, the method is to first retrieve words similar or related to each target word as nodes, measure the similarity/relatedness between the words to form an ego graph/network, and either group the nodes by graph clustering or find hubs or representative nodes in the graph using HyperLex (Véronis, 2004) or PageRank (Agirre et al., 2006).…”
Section: Clustering Related Words (mentioning)
confidence: 99%
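The pipeline quoted above (build an ego co-occurrence network around a target word, then find hub or representative nodes with PageRank) can be illustrated with a short sketch. This is a hypothetical simplification, not the algorithm of Agirre et al. (2006) or Véronis (2004): it assumes a weighted networkx graph whose nodes are context words, and it picks mutually non-adjacent, high-PageRank nodes as hub candidates, one per induced sense.

```python
# Minimal sketch (not the cited authors' exact method): rank the nodes of an
# ego co-occurrence graph with PageRank and keep top-ranked nodes that are not
# adjacent to an already chosen hub, so each hub stands for a different region
# (induced sense) of the graph.
import networkx as nx

def pagerank_hubs(graph, n_hubs=3):
    ranks = nx.pagerank(graph, weight="weight")
    hubs = []
    for node in sorted(ranks, key=ranks.get, reverse=True):
        if all(not graph.has_edge(node, h) for h in hubs):
            hubs.append(node)
        if len(hubs) == n_hubs:
            break
    return hubs

# Toy ego network around the target word "bank"; edge weights are invented
# co-occurrence counts (two loosely separated clusters: financial vs. river).
g = nx.Graph()
g.add_weighted_edges_from([
    ("money", "loan", 5), ("money", "account", 4), ("loan", "account", 3),
    ("river", "water", 6), ("river", "shore", 2), ("water", "shore", 3),
])
print(pagerank_hubs(g, n_hubs=2))  # one hub from each co-occurrence cluster
```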
“…Then vectors are clustered and the resulting clusters represent the word senses. Recently some works have developed graph-based methods to achieve WSI [13,29,30]. Typically these works select contexts of a given ambiguous word w and assign every word appearing in these contexts to a node of the graph.…”
Section: A Description (mentioning)
confidence: 99%
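The construction step described in this excerpt (take the contexts of an ambiguous word and turn every co-occurring word into a graph node) might look roughly like the sketch below. The tokenised contexts and the target word "bank" are invented for illustration; edge weights simply count how often two words share a context.

```python
# Hypothetical illustration of the graph construction step: every word that
# co-occurs with the ambiguous target in one of its contexts becomes a node,
# and words sharing a context are linked by a weighted edge.
from itertools import combinations
import networkx as nx

def build_cooccurrence_graph(contexts, target):
    """contexts: list of tokenised sentences containing the target word."""
    graph = nx.Graph()
    for tokens in contexts:
        words = sorted({w for w in tokens if w != target})
        for u, v in combinations(words, 2):
            if graph.has_edge(u, v):
                graph[u][v]["weight"] += 1
            else:
                graph.add_edge(u, v, weight=1)
    return graph

contexts = [
    ["deposit", "money", "bank", "account"],
    ["bank", "loan", "interest", "money"],
    ["river", "bank", "water", "fishing"],
]
g = build_cooccurrence_graph(contexts, target="bank")
print(g.number_of_nodes(), g.number_of_edges())
```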
“…In a graph of this type, the vertices correspond to the words appearing in the contexts of the target words and the edges represent their relations. These relations may be grammatical [12] or they may be cooccurrences of the words in fixed contexts [13,14]. The senses of the target words are discovered by partitioning the co-occurrence graph using clustering techniques, or by using a PageRank algorithm.…”
Section: Inducing Word Senses On a Per-word Basis (mentioning)
confidence: 99%
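As a rough illustration of the partitioning step mentioned here, the sketch below splits a toy co-occurrence graph (the same invented "bank" example as above) into communities with networkx's greedy modularity clustering and tags a new occurrence of the target by context overlap. The clustering choice is a stand-in for the various techniques used in the cited systems, not a reimplementation of any of them.

```python
# Hedged sketch: partition the co-occurrence graph into communities (one per
# induced sense), then label a new occurrence of the target word by the
# community that overlaps most with its context.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def induce_senses(graph):
    return [set(c) for c in greedy_modularity_communities(graph, weight="weight")]

def tag_occurrence(context_words, senses):
    overlaps = [len(set(context_words) & sense) for sense in senses]
    return max(range(len(senses)), key=lambda i: overlaps[i])

g = nx.Graph()
g.add_weighted_edges_from([
    ("money", "loan", 5), ("money", "account", 4), ("loan", "account", 3),
    ("river", "water", 6), ("river", "shore", 2), ("water", "shore", 3),
])
senses = induce_senses(g)                            # two clusters for "bank"
print(tag_occurrence(["loan", "interest"], senses))  # index of the financial sense
```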