Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.3115/v1/d14-1110
A Unified Model for Word Sense Representation and Disambiguation

Abstract: Most word representation methods assume that each word owns a single semantic vector. This is usually problematic because lexical ambiguity is ubiquitous, which is also the problem to be resolved by word sense disambiguation. In this paper, we present a unified model for joint word sense representation and disambiguation, which will assign distinct representations for each word sense. The basic idea is that both word sense representation (WSR) and word sense disambiguation (WSD) will benefit from each other…
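The abstract's core idea — one distinct vector per word sense, with disambiguation done by matching a context against those sense vectors — can be sketched as follows. This is a minimal illustration, not the paper's actual model: the 3-dimensional vectors and the sense names (`bank.finance`, `bank.river`) are hypothetical toy data, and real systems would learn sense vectors from corpora and a sense inventory such as WordNet.

```python
import numpy as np

# Toy word vectors (hypothetical 3-d embeddings; real models use 50-300 dims).
word_vecs = {
    "money":   np.array([0.9, 0.1, 0.0]),
    "deposit": np.array([0.8, 0.2, 0.1]),
    "river":   np.array([0.1, 0.9, 0.1]),
    "water":   np.array([0.0, 0.8, 0.2]),
}

# Distinct vectors for each sense of the ambiguous word "bank",
# as in multi-prototype / sense-representation models.
sense_vecs = {
    "bank.finance": np.array([0.85, 0.15, 0.05]),
    "bank.river":   np.array([0.05, 0.85, 0.15]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def disambiguate(context_words):
    """Pick the sense whose vector is closest to the averaged context vector."""
    ctx = np.mean([word_vecs[w] for w in context_words], axis=0)
    return max(sense_vecs, key=lambda s: cosine(ctx, sense_vecs[s]))

print(disambiguate(["money", "deposit"]))  # bank.finance
print(disambiguate(["river", "water"]))    # bank.river
```

The mutual benefit the abstract describes runs in both directions: better sense vectors make the nearest-sense choice more reliable, and correctly disambiguated contexts provide cleaner training signal for each sense vector.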

Cited by 218 publications (191 citation statements).
References 23 publications (19 reference statements).
“…Most past canonicalization models use precision, recall, and F1 score to evaluate on the SemEval dataset (Mihalcea et al., 2004). The current state-of-the-art performance on SemEval is an F1 score of 75.8% (Chen et al., 2014). Since our canonicalization setup is different from the SemEval benchmark (we have an open vocabulary and no annotated ground truth for evaluation), our canonicalization … For example, “carriage” is mapped to carriage.n.02: a vehicle with wheels drawn by one or more horses.…”
Section: Canonicalization Statistics (mentioning, confidence: 99%)
“…(2) We will evaluate the performance of our OIWE models in various NLP applications. (3) We will also investigate possible extensions of our OIWE models, including multiple-prototype models for word sense embeddings (Huang et al., 2012; Chen et al., 2014), semantic compositions for phrase embeddings (Zhao et al., 2015) and knowledge representation (Bordes et al., 2013; Lin et al., 2015).…”
Section: Discussion (mentioning, confidence: 99%)
“…Previously, other approaches were introduced to utilise embeddings for supervised (Zhong and Ng, 2010; Rothe and Schütze, 2015; Taghipour and Ng, 2015) and knowledge-based WSD (Chen et al., 2014).…”
Section: Related Work (mentioning, confidence: 99%)