Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1409
Revisiting Joint Modeling of Cross-document Entity and Event Coreference Resolution

Abstract: Recognizing coreferring events and entities across multiple texts is crucial for many NLP applications. Despite the task's importance, research focus was given mostly to within-document entity coreference, with rather little attention to the other variants. We propose a neural architecture for cross-document coreference resolution. Inspired by Lee et al. (2012), we jointly model entity and event coreference. We represent an event (entity) mention using its lexical span, surrounding context, and relation to entit…
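The abstract describes building a mention representation from the lexical span, the surrounding context, and related entity/event arguments. A minimal sketch of that idea is given below; the module, its dimensions, and the way the three vectors are pooled are illustrative assumptions, not the paper's released architecture.

# Minimal sketch (assumed, not the authors' code): a mention vector built by
# concatenating a span vector, a context vector, and a pooled vector of
# related entity/event arguments, then projecting to a fixed size.
import torch
import torch.nn as nn

class MentionEncoder(nn.Module):
    def __init__(self, span_dim=300, ctx_dim=1024, arg_dim=300, out_dim=512):
        super().__init__()
        # project the concatenated [span; context; arguments] features
        self.proj = nn.Linear(span_dim + ctx_dim + arg_dim, out_dim)

    def forward(self, span_vec, ctx_vec, arg_vec):
        # span_vec: embedding of the mention's lexical span
        # ctx_vec:  contextual encoding of the surrounding sentence
        # arg_vec:  pooled representation of related entity (or event) mentions
        features = torch.cat([span_vec, ctx_vec, arg_vec], dim=-1)
        return torch.relu(self.proj(features))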

Cited by 83 publications (234 citation statements)
References 24 publications
“…The current state-of-the-art model is based on Lee et al. (2017) and uses a modern BERT-based (Devlin et al., 2019) architecture. Comparatively, CDCR, which involves co-reference resolution across multiple documents, has received less attention in recent years (Bagga and Baldwin, 1998; Rao et al., 2010; Dutta and Weikum, 2015; Barhom et al., 2019). Cattan et al. (2020) jointly learn both entity and event co-reference tasks, achieving current state-of-the-art performance for CDCR, and as such provide a strong baseline for experiments in CD²CR.…”
Section: Co-reference Resolution (citation type: mentioning)
confidence: 99%
“…Cattan et al. (2020) jointly learn both entity and event co-reference tasks, achieving current state-of-the-art performance for CDCR, and as such provide a strong baseline for experiments in CD²CR. Both the Cattan et al. (2020) and Barhom et al. (2019) models are trained and evaluated on the ECB+ corpus (Cybulska and Vossen, 2014), which contains news articles annotated with both entity and event mentions.…”
Section: Co-reference Resolution (citation type: mentioning)
confidence: 99%
“…ECB+ contains 982 documents clustered into 43 topics, and has two evaluation settings: coreferring mentions occurring within a single document (within-document) or across a document collection (cross-document). For the event co-reference pipeline, we follow the joint modeling method of Barhom et al. (2019), where they jointly represented entity and event mentions with various features and learned a pairwise mention/entity scorer for coreference classification. We augment their mention features with the mention's vector representations extracted from either GPT 2.0 or our zero-shot augmented GPT 2.0.…”
Section: Event Co-reference (citation type: mentioning)
confidence: 99%
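The excerpt above refers to a pairwise mention/entity scorer over featurized mention representations. The following is a hedged sketch of such a scorer; the feature choices, dimensions, and sigmoid output are assumptions for illustration, not the exact setup of Barhom et al. (2019) or the GPT-2-augmented variant described in the citing work.

# Assumed sketch of a pairwise coreference scorer: two mention vectors
# (e.g. contextual representations) plus hand-crafted pair features go
# through an MLP that outputs the probability that the pair corefers.
import torch
import torch.nn as nn

class PairwiseScorer(nn.Module):
    def __init__(self, mention_dim=768, feat_dim=10, hidden=256):
        super().__init__()
        # input: [m1; m2; m1 * m2; pair features]
        self.mlp = nn.Sequential(
            nn.Linear(3 * mention_dim + feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, m1, m2, pair_feats):
        # element-wise product captures similarity between the two mentions
        x = torch.cat([m1, m2, m1 * m2, pair_feats], dim=-1)
        return torch.sigmoid(self.mlp(x))  # coreference probability per pair

# Usage: score a batch of 4 hypothetical mention pairs with 10 pair features each.
scorer = PairwiseScorer()
probs = scorer(torch.randn(4, 768), torch.randn(4, 768), torch.randn(4, 10))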