Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016
DOI: 10.18653/v1/n16-1115
Search Space Pruning: A Simple Solution for Better Coreference Resolvers

Abstract: There is a significant gap between the performance of a coreference resolution system on gold mentions and on system mentions. This gap is due to the large and unbalanced search space in coreference resolution when using system mentions. In this paper we show that search space pruning is a simple but efficient way of improving coreference resolvers. By incorporating our pruning method in one of the state-of-the-art coreference resolution systems, we achieve the best reported overall score on the CoNLL 2012 Eng…
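The abstract's core idea — prune unlikely-to-corefer system mentions before antecedent search — can be illustrated with a minimal sketch. The scoring features, weights, and threshold below are invented for illustration and are not the paper's actual method:

```python
# Hypothetical sketch of search-space pruning for coreference resolution:
# score each system mention for how likely it is to take part in a
# coreference chain, and drop low-scoring mentions before pairwise
# antecedent ranking. All features and thresholds here are illustrative.

def mention_score(mention):
    """Toy coreference-likelihood score: named entities, pronouns, and
    head words already seen in the document are more likely to corefer."""
    score = 0.0
    if mention["is_named_entity"]:
        score += 0.6
    if mention["is_pronoun"]:
        score += 0.5
    if mention["head_seen_before"]:
        score += 0.4
    return score

def prune_mentions(mentions, threshold=0.5):
    """Keep only mentions whose score clears the threshold, shrinking
    the quadratic mention-pair search space the resolver must consider."""
    return [m for m in mentions if mention_score(m) >= threshold]

mentions = [
    {"text": "Barack Obama", "is_named_entity": True, "is_pronoun": False, "head_seen_before": False},
    {"text": "he", "is_named_entity": False, "is_pronoun": True, "head_seen_before": False},
    {"text": "a table", "is_named_entity": False, "is_pronoun": False, "head_seen_before": False},
]
kept = prune_mentions(mentions)
print([m["text"] for m in kept])  # → ['Barack Obama', 'he']
```

The payoff of pruning is that the downstream pairwise resolver scores far fewer candidate mention pairs, and fewer spurious non-referring mentions can be linked into chains.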

Cited by 6 publications (6 citation statements)
References 19 publications
“…deepcoref introduced by Clark and Manning (2016a). However, we show that a simple SVM model that is adapted from our coreferent mention detection approach (Moosavi and Strube, 2016), significantly outperforms the more complex neural models. We show that the SVM model also generalizes better than the neural model on a new domain other than the CoNLL dataset.…”
Section: Introduction
Mentioning confidence: 83%
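The citation statement above contrasts a simple SVM mention detector with more complex neural models. At prediction time, a linear SVM reduces to a single decision function, w·x + b, which the following sketch illustrates; the weights and bias are invented for illustration and are not the trained model from Moosavi and Strube (2016):

```python
# Hedged sketch of an SVM-style coreferent-mention detector, showing
# only the linear decision function applied at prediction time.
# Feature weights and bias are invented, not learned from data.

WEIGHTS = {"is_pronoun": 1.0, "is_named_entity": 0.8, "exact_match_antecedent": 1.2}
BIAS = -0.9

def decision(features):
    """Linear SVM decision value: w . x + b over binary features."""
    return sum(WEIGHTS[k] * float(v) for k, v in features.items()) + BIAS

def is_coreferent_mention(features):
    """Classify a mention as coreferent when the decision value is positive."""
    return decision(features) > 0.0

print(is_coreferent_mention(
    {"is_pronoun": True, "is_named_entity": False, "exact_match_antecedent": False}
))  # → True
```

In practice such a model would be trained (e.g. with a hinge loss) on labeled mentions; the appeal noted in the citing work is that this simple linear form generalizes to new domains better than heavier neural detectors.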
“…singleton). The proposed approaches of Recasens et al (2013), Marneffe et al (2015), and Moosavi and Strube (2016) discriminate mentions for coreference resolution this way.…”
Section: Discourse-old Mentions
Mentioning confidence: 99%
“…Alternatively, as Zhong and Chen (2021) showed the benefit of disentangling the span representations for entity detection and relation extraction in information extraction based on the intuition that they are disparate tasks, one may split the task of anaphoricity decision from mention linking and introduce a separately parameterized anaphoricity module, similarly considering the discrepancy between the two tasks. Recasens et al (2013); Moosavi and Strube (2016); inter alia have pursued similar ideas in the pre-neural era, but it has still not yet been explored with deep models.…”
Section: The Detector's Difficulty With Anaphoricity Decisions
Mentioning confidence: 99%