Proceedings of the First Workshop on Computational Approaches to Discourse 2020
DOI: 10.18653/v1/2020.codi-1.10

Joint Modeling of Arguments for Event Understanding

Abstract: We recognize the task of event argument linking in documents as similar to that of intent slot resolution in dialogue, providing a Transformer-based model that extends from a recently proposed solution to resolve references to slots. The approach allows for joint consideration of argument candidates given a detected event, which we illustrate leads to state-of-the-art performance in multi-sentence argument linking.
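The core idea in the abstract is that candidate argument spans are considered jointly given a detected event: scores for all candidates are normalized per role so that spans compete with one another, rather than each span being classified independently. The sketch below illustrates that idea with toy vectors; the scoring function, role/candidate names, and embeddings are illustrative assumptions, not the paper's actual model.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def score(trigger_vec, role_vec, cand_vec):
    # Toy scorer: dot product of the candidate with an event-plus-role
    # query vector (illustrative only, not the paper's architecture).
    query = [t + r for t, r in zip(trigger_vec, role_vec)]
    return sum(q * c for q, c in zip(query, cand_vec))

def link_arguments(trigger_vec, role_vecs, cand_vecs):
    """For each role, normalize scores jointly over ALL candidate spans
    and pick the argmax, so candidates compete for each role."""
    links = {}
    names = list(cand_vecs)
    for role, role_vec in role_vecs.items():
        scores = [score(trigger_vec, role_vec, c) for c in cand_vecs.values()]
        probs = softmax(scores)
        links[role] = names[max(range(len(probs)), key=probs.__getitem__)]
    return links

# Toy 3-d "embeddings" for a detected Transport event and two candidate spans.
trigger = [1.0, 0.0, 0.0]
roles = {"origin": [0.0, 1.0, 0.0], "destination": [0.0, 0.0, 1.0]}
cands = {"Baghdad": [0.2, 1.0, 0.1], "Mosul": [0.1, 0.0, 1.0]}
print(link_arguments(trigger, roles, cands))
# → {'origin': 'Baghdad', 'destination': 'Mosul'}
```

A real model would replace the toy vectors with contextualized span representations from a Transformer encoder; the joint per-role normalization is the point being illustrated.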

Cited by 11 publications (20 citation statements)
References 18 publications
“…Our pipeline consisted of multilingual coreference resolution (using predetermined mention spans), the hierarchical entity typing model, and a separate state-of-the-art argument linking model (Chen et al., 2020b). We found improved performance with entity coreference (from 29.1 F1 to 33.3 F1), especially in Russian (from 26.2 F1 to 33.3 F1), likely due to our use of multilingual data and encoders.…”
Section: And Match Modules: SM-KBP (mentioning)
Confidence: 99%
“…We use Unified-QA (Khashabi et al., 2020) as the ENCODER described in Section 4.2. Since our experimental setting is inherently the same as that of Chen et al. (2020c), we follow prior work and use span-level F1 as the evaluation metric. We take Chen et al. (2020c) as our baseline, shown as JOINTARG in the tables, since it obtains the previous state-of-the-art results on both ACE and RAMS using gold mention spans.…”
Section: Experimental Settings (mentioning)
Confidence: 99%
“…Since our experimental setting is inherently the same as that of Chen et al. (2020c), we follow prior work and use span-level F1 as the evaluation metric. We take Chen et al. (2020c) as our baseline, shown as JOINTARG in the tables, since it obtains the previous state-of-the-art results on both ACE and RAMS using gold mention spans. As far as we are aware, we are the first to publish results on the Granular dataset with this experimental setup.…”
Section: Experimental Settings (mentioning)
Confidence: 99%
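The span-level F1 metric mentioned in the excerpts above credits a prediction only when the (role, span) pair exactly matches a gold annotation. A minimal sketch of the standard definition (the role names and offsets below are illustrative):

```python
def span_f1(gold, pred):
    """Span-level F1 over exact (role, start, end) matches."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)                       # exact-match true positives
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = [("origin", 4, 5), ("destination", 9, 10)]
pred = [("origin", 4, 5), ("destination", 9, 11)]   # second span is off by one
print(span_f1(gold, pred))
# → 0.5  (one of two predictions exact-matches)
```

Note that the near-miss destination span earns no partial credit, which is why exact-match span F1 is a strict metric.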
“…Our pipeline consisted of the multilingual coreference resolution (using the predetermined mentions from GAIA) and hierarchical entity typing models discussed in this paper, followed by a separate state-of-the-art argument linking model (Chen et al., 2020b). We found improved performance with entity coreference (from 29.1 F1 to 33.3 F1), especially in Russian (from 26.2 F1 to 33.3 F1), likely due to our use of multilingual data and contextualized encoders.…”
Section: And Match Modules: SM-KBP (mentioning)
Confidence: 99%