2021
DOI: 10.1007/978-3-030-89363-7_9

Generating Pseudo Connectives with MLMs for Implicit Discourse Relation Recognition

Cited by 5 publications (4 citation statements)
References 35 publications
“…[45] enriches the training dataset by incorporating a corpus of explicitly related arguments. [46,47] expand the pseudo-labeled samples by training an explicit classifier. [48] utilizes explicit data for one-teacher multi-data multi-task learning.…”
Section: Implicit Discourse Relation Recognition
mentioning, confidence: 99%
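The expansion scheme attributed to [46,47] can be pictured as follows. This is a minimal sketch assuming a generic text classifier and a fixed confidence threshold; the function name, model choice, and data format are illustrative, not taken from the cited papers.

    # Sketch: expand training data with pseudo labels from an explicit classifier.
    # All names and the 0.9 threshold are illustrative assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def expand_with_pseudo_labels(explicit_texts, explicit_labels,
                                  unlabeled_texts, threshold=0.9):
        """Train on explicit examples, then pseudo-label unlabeled argument pairs."""
        clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        clf.fit(explicit_texts, explicit_labels)    # supervision from explicit data
        probs = clf.predict_proba(unlabeled_texts)  # score the unlabeled pairs
        # Keep only confident predictions as extra (pseudo-labeled) samples.
        return [(text, clf.classes_[p.argmax()])
                for text, p in zip(unlabeled_texts, probs)
                if p.max() >= threshold]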
“…In discourse parsing, it is well known that there is a clear gap between explicit relation classification (the relation is marked explicitly with a DM) and implicit relation classification (a relation holds between two text spans but is not marked with a DM), namely about 90% vs. 50% accuracy, respectively, in 4-way classification (as indicated by Shi and Demberg [60]). To improve discourse relation parsing, several works have focused on enhancing their systems for implicit relation classification: removing DMs from explicit relations to augment implicit relation classification data [6,58]; framing explicit vs. implicit relation classification as a domain adaptation problem [26,53]; learning sentence representations from automatically collected large-scale datasets [44,61]; multi-task learning [31,43]; automatic explicitation of implicit DMs followed by explicit relation classification [28,29,60].…”
Section: Rocha et al. / Cross-genre Argument Mining: Automatically Fil...
mentioning, confidence: 99%
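The first of those strategies, removing DMs from explicit relations to create extra implicit-style training pairs, is simple enough to sketch. The field names, the connective list, and the example below are illustrative assumptions, not data from the cited work.

    # Sketch: turn an explicit example into an implicit-style training pair
    # by dropping the discourse marker and keeping the relation label.
    # Field names and the connective list are illustrative assumptions.
    KNOWN_CONNECTIVES = {"because", "however", "therefore", "meanwhile"}

    def explicit_to_implicit(example):
        """Strip the DM; keep the arguments and the relation label."""
        if example["connective"].lower() not in KNOWN_CONNECTIVES:
            return None  # unfamiliar marker: skip rather than guess
        return {"arg1": example["arg1"],
                "arg2": example["arg2"],
                "relation": example["relation"]}

    print(explicit_to_implicit({
        "arg1": "The market fell sharply",
        "arg2": "investors feared higher rates",
        "connective": "because",
        "relation": "Contingency",
    }))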
“…However, we work in a more challenging scenario, where DM augmentation is performed at the paragraph level in an end-to-end fashion (i.e., from raw text to a DM-augmented text). Consequently, our approach differs from prior work in multiple ways: (a) we aim to explicitate all discourse relations at the paragraph level, while prior work handles one relation at a time [28,29,60,64] (our models can exploit a wider context window and the interdependencies between different discourse relations), and (b) we do not require any additional information, such as prior knowledge of discourse unit boundaries (e.g., clauses or sentences) [28,29,60,64] or of the target discourse relations [64].…”
Section: Rocha et al. / Cross-genre Argument Mining: Automatically Fil...
mentioning, confidence: 99%
“…Pitler and Nenkova (2009) train a classifier with the connectives in the text as the only features and find that it achieves over 90% accuracy on explicit relation recognition. Similarly, many attempts have been made to use connectives to improve recognition performance on implicit relations, including pipeline methods (Zhou et al., 2010; Jiang et al., 2021), multi-task training (Kishimoto et al., 2020; Long and Webber, 2022), adversarial training (Qin et al., 2017), joint training (Liu and Strube, 2023), and prompt learning (Zhou et al., 2022; Xiang et al., 2023). Our work differs from them in both motivation and application scenarios.…”
Section: Related Work
mentioning, confidence: 99%
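The idea in the indexed paper's title, generating pseudo connectives with a masked language model, can be illustrated with an off-the-shelf fill-mask pipeline. This is a sketch of the general idea only; the model choice (roberta-base) and the single-mask template are assumptions, and the paper's actual procedure may differ.

    # Sketch: ask an MLM for plausible connectives between two arguments.
    # Model choice and template are illustrative; requires `transformers`.
    from transformers import pipeline

    fill = pipeline("fill-mask", model="roberta-base")
    mask = fill.tokenizer.mask_token  # "<mask>" for RoBERTa

    arg1 = "The market fell sharply"
    arg2 = "investors feared higher interest rates"

    # Top candidate fillers for the slot between the two arguments.
    for cand in fill(f"{arg1}, {mask} {arg2}.", top_k=5):
        print(f"{cand['token_str'].strip():>12}  score={cand['score']:.3f}")

High-scoring fillers can then serve as pseudo connectives attached to an implicit argument pair before relation classification.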