Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.3115/v1/d14-1225

Prune-and-Score: Learning for Greedy Coreference Resolution

Abstract: We propose a novel search-based approach for greedy coreference resolution, where the mentions are processed in order and added to previous coreference clusters. Our method is distinguished by the use of two functions to make each coreference decision: a pruning function that prunes bad coreference decisions from further consideration, and a scoring function that then selects the best among the remaining decisions. Our framework reduces learning of these functions to rank learning, which helps leverage powerful…
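To illustrate the two-function decision rule the abstract describes, here is a minimal Python sketch of the greedy loop. The names (resolve, prune_fn, score_fn) and the top-k pruning rule are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of greedy prune-and-score coreference resolution.
# prune_fn and score_fn stand in for the paper's two learned functions;
# keeping the top-k candidates is one simple way to realize pruning.

def resolve(mentions, prune_fn, score_fn, keep_k=5):
    """Greedily assign each mention to a previous cluster or start a new one."""
    clusters = []  # each cluster is a list of mentions
    for mention in mentions:
        # Candidate decisions: merge with any existing cluster, or NEW (None).
        candidates = list(range(len(clusters))) + [None]
        # Pruning function: discard bad decisions, keep the top-k survivors.
        survivors = sorted(candidates,
                           key=lambda c: prune_fn(mention, clusters, c),
                           reverse=True)[:keep_k]
        # Scoring function: select the best among the remaining decisions.
        best = max(survivors, key=lambda c: score_fn(mention, clusters, c))
        if best is None:
            clusters.append([mention])
        else:
            clusters[best].append(mention)
    return clusters
```

Per the abstract, learning both functions is reduced to rank learning, so each would be trained with an off-the-shelf rank-learner rather than hand-written.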

Citations: cited by 20 publications (28 citation statements).
References: 26 publications.
“…The majority of useful features for coreference systems operate on pairs of mentions (in one of our experiments we show that adding classic entity-level features does not improve our system), but incremental coreference systems must make decisions involving many mention pairs. Other incremental coreference systems either incorporate features from a single pair (Stoyanov and Eisner, 2012) or average features across all pairs in the involved clusters (Ma et al., 2014). Our system instead combines information from the involved mention pairs in a variety of ways with higher-order features produced from the scores of mention-pair models.…”
Section: Related Work (mentioning)
confidence: 99%
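To make the contrast concrete, the following is a hedged Python sketch of the three feature-combination strategies this excerpt names. The callables pair_score and pair_feats and the specific aggregates are illustrative assumptions, not any cited system's exact feature set.

```python
from statistics import mean

def combine_pair_info(cluster, mention, pair_score, pair_feats):
    """Aggregate mention-pair information for one coreference decision.

    Assumes a nonempty candidate cluster; pair_score returns a float and
    pair_feats returns a dict of named feature values (both hypothetical).
    """
    pairs = [(m, mention) for m in cluster]
    scores = [pair_score(a, b) for a, b in pairs]
    # Strategy 1: features from a single pair, e.g. the closest antecedent
    # (in the spirit of Stoyanov and Eisner, 2012).
    single = pair_feats(*pairs[-1])
    # Strategy 2: average each feature value across all involved pairs
    # (in the spirit of Ma et al., 2014).
    feats = [pair_feats(a, b) for a, b in pairs]
    averaged = {k: mean(f[k] for f in feats) for k in feats[0]}
    # Strategy 3: higher-order features built from the pair model's scores.
    higher_order = {"max_score": max(scores),
                    "min_score": min(scores),
                    "avg_score": mean(scores)}
    return single, averaged, higher_order
```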
“…In Table 3 we compare the results of our system with the following state-of-the-art approaches: the JOINT and INDEP models of the Berkeley system (Durrett and Klein, 2014) (the JOINT model jointly performs NER and entity linking along with coreference); the Prune-and-Score system (Ma et al., 2014); the HOTCoref system (Björkelund and Kuhn, 2014); the CPL³M system (Chang et al., 2013); and Fernandes et al. We use the full entity-centric clustering algorithm, drawing upon scores from both pairwise models. We do not make use of agreement features, as these did not increase accuracy and complicated the system.…”
Section: Final System Performance (mentioning)
confidence: 99%
“…Among those, anaphoricity detection is the most popular method (e.g. Ng and Cardie (2002), Denis and Baldridge (2007), Ng (2009), Zhou and Kong (2009), Durrett and Klein (2013), Martschat and Strube (2015), Wiseman et al. (2015), Peng et al. (2015), and Lassalle and Denis (2015)), while singleton detection is a more recent method (Recasens et al., 2013; Ma et al., 2014; Marneffe et al., 2015).…”
Section: Introduction (mentioning)
confidence: 99%