Proceedings of the 2nd Shared Task on Discourse Relation Parsing and Treebanking (DISRPT 2021), 2021
DOI: 10.18653/v1/2021.disrpt-1.1

The DISRPT 2021 Shared Task on Elementary Discourse Unit Segmentation, Connective Detection, and Relation Classification

Abstract: In 2021, we organized the second iteration of a shared task dedicated to the underlying units used in discourse parsing across formalisms: the DISRPT Shared Task (Discourse Relation Parsing and Treebanking). Adding to the 2019 tasks on Elementary Discourse Unit Segmentation and Connective Detection, this iteration of the Shared Task included for the first time a track on discourse relation classification across three formalisms: RST, SDRT, and PDTB. In this paper we review the data included in the Shared Task, …
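As an informal illustration only (not the Shared Task's official data format or baseline), the segmentation and connective-detection tracks evaluate what is commonly framed as per-token boundary labelling. The minimal Python sketch below, using an invented B/I label scheme, shows how such labels can be turned back into EDU spans.

def edus_from_boundary_labels(tokens, labels):
    # Group tokens into EDUs: "B" opens a new unit, any other label continues it.
    # (Hypothetical label scheme, used only for this sketch.)
    edus, current = [], []
    for token, label in zip(tokens, labels):
        if label == "B" and current:
            edus.append(current)
            current = []
        current.append(token)
    if current:
        edus.append(current)
    return edus

tokens = ["Although", "it", "rained", ",", "we", "went", "out", "."]
labels = ["B", "I", "I", "I", "B", "I", "I", "I"]
print(edus_from_boundary_labels(tokens, labels))
# -> [['Although', 'it', 'rained', ','], ['we', 'went', 'out', '.']]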

Cited by 8 publications (9 citation statements). References: 29 publications (25 reference statements).

“…The latter, meanwhile, connects the EDU-boundary representations, which enhances the model's ability to capture long-range dependencies between EDUs. 2 We could not compare our segmentation results with the DISRPT 2019 Shared Task (Zeldes et al., 2019) participants. We found few inconsistencies in the settings.…”
Section: Ablation Study
confidence: 99%
“…Computational analysis of discourse has been the focus of several shared tasks (Xue et al., 2015, 2016; Zeldes et al., 2019, 2021), and there have been several discourse-annotated corpora for multiple languages (Zeyrek and Webber, 2008; Meyer et al., 2011; Danlos et al., 2012; Zhou and Xue, 2015; Zeyrek et al., 2020; da Cunha et al., 2011; Das and Stede, 2018; Afantenos et al., 2012). Despite their widespread use, implicit sense classification remains a challenging task (Liang et al., 2020), and discourse models have been shown not to perform well under even gradual domain shift (Atwell et al., 2021), which may be the result of the limited timeframe and distribution of the articles contained in the most commonly used English discourse datasets, the Penn Discourse Treebank (Miltsakaki et al., 2004; Prasad et al., 2008; Webber et al., 2019) and the RST Discourse Treebank (RST-DT) (Carlson et al., 2001).…”
Section: Discourse and Domain Shift
confidence: 99%
“…Outside the PDTB-3 framework, intra-sentential discourse relations are handled by (1) identifying discourse units (DUs), (2) attaching them to one another, and (3) associating the attachment with a coherence relation (Muller et al., 2012). One can therefore ask why we did not simply adopt this framework in the PDTB-3 and exploit the relatively good performance by systems in the DISRPT shared task on sentence-level discourse unit segmentation (Zeldes et al., 2019). There are two main reasons: First, DISRPT (and the approaches to discourse structure it covers) assumes that discourse segments cover a sentence with a non-overlapping partition.…”
Section: Related Work
confidence: 99%
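As a schematic sketch of the three-step procedure described in the statement above (the segmenter, attacher, and labeller names below are hypothetical placeholders, not components of any system cited here):

def parse_discourse(tokens, segmenter, attacher, labeller):
    # (1) identify discourse units (DUs)
    dus = segmenter(tokens)                # e.g. a list of token spans
    # (2) attach DUs to one another
    attachments = attacher(dus)            # e.g. (head_du, dependent_du) pairs
    # (3) associate each attachment with a coherence relation
    return [(head, dep, labeller(head, dep)) for head, dep in attachments]
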
“…Of course, there are "work-arounds" for over-segmentation, such as RST's use of a SAME-SEGMENT relation (Mann and Thompson, 1988), and under-segmentation can be addressed through additional segmentation. However, we decided that starting from scratch would allow us to clearly identify the problems of parsing intra-sentential implicits, at which point, we could consider what we could adopt from work done on the DISRPT shared task on sentence-level discourse unit segmentation (Zeldes et al., 2019).…”
Section: Related Work
confidence: 99%