Proceedings of the 15th Conference of the European Chapter of The Association for Computational Linguistics: Volume 1 2017
DOI: 10.18653/v1/e17-1028
Cross-lingual RST Discourse Parsing

Abstract: Discourse parsing is an integral part of understanding information flow and argumentative structure in documents. Most previous research has focused on inducing and evaluating models from the English RST Discourse Treebank. However, discourse treebanks for other languages exist, including Spanish, German, Basque, Dutch and Brazilian Portuguese. The treebanks share the same underlying linguistic theory, but differ slightly in the way documents are annotated. In this paper, we present (a) a new discourse parser …

Cited by 67 publications (100 citation statements). References 34 publications.
“…LLC16 is a CKY chart parser with a hierarchical neural network model (attention-based hierarchical bi-LSTM) (Li et al., 2016). BCS17 mono and BCS17 cross+dev are two variants of a transition-based parser that uses a feed-forward neural network model (Braud et al., 2017). JE14 DPLP is a shift-reduce parser that uses an SVM model (Ji and Eisenstein, 2014).…”
Section: A Sample of RST Discourse Parsers
Mentioning confidence: 99%
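The citation statement above contrasts chart-based (CKY) parsers with transition-based and shift-reduce ones. As a rough illustration of the latter family, here is a minimal sketch of a shift-reduce transition system that builds a binary discourse tree over EDUs (elementary discourse units). This is a hypothetical toy, not the Braud et al. (2017) implementation: their parser scores transitions with a feed-forward neural network, whereas here an oracle action sequence is supplied directly.

```python
def parse(edus, actions):
    """Apply SHIFT/REDUCE actions to a queue of EDUs, returning one tree.

    Toy sketch: real transition-based discourse parsers also predict
    nuclearity and relation labels at each REDUCE step.
    """
    stack, queue = [], list(edus)
    for action in actions:
        if action == "shift":       # move the next EDU onto the stack
            stack.append(queue.pop(0))
        else:                       # "reduce": merge the top two subtrees
            right = stack.pop()
            left = stack.pop()
            stack.append((left, right))
    assert len(stack) == 1 and not queue, "actions must yield one full tree"
    return stack[0]

tree = parse(["e1", "e2", "e3"], ["shift", "shift", "reduce", "shift", "reduce"])
# tree == (("e1", "e2"), "e3")
```

A trained model replaces the oracle `actions` with per-step predictions from features of the stack and queue; a CKY-style parser instead scores all possible spans and decodes the best tree globally.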
“…PDTB focuses on shallow discourse relations but ignores the overall discourse structure (Yang and Li 2018), while in this paper we aim to parse discourse structures. As for RST, there have been many approaches, including transition-based methods (Braud, Coavoux, and Søgaard 2017; Wang, Li, and Wang 2017; Yu, Zhang, and Fu 2018) and those involving CYK-like algorithms (Joty, Carenini, and Ng 2015; Li, Li, and Chang 2016; Liu and Lapata 2017) or greedy bottom-up algorithms (Feng and Hirst 2014). However, constituency-based RST does not allow non-adjacent relations, which makes it inapplicable to multi-party dialogues.…”
Section: Related Work
Mentioning confidence: 99%
“…Moreover, state-of-the-art approaches for discourse dependency parsing as mentioned above still rely on handcrafted features or external parsers. Neural networks have recently been widely applied in various NLP tasks, including RST discourse parsing (Li, Li, and Chang 2016; Braud, Coavoux, and Søgaard 2017) and dialogue act recognition (Kumar et al. 2018; Chen et al. 2018). Jia et al. (2018a; 2018b) also applied neural networks in their transition-based dependency parsing models.…”
Section: Related Work
Mentioning confidence: 99%
“…For Morey et al. (2017)'s study, they submitted predicted discourse trees from an updated, unpublished version of their parser. In the cross+dev setting, Braud et al. (2017) train their parser on RST discourse treebanks for several languages. other parsers except for Feng and Hirst (2014a)'s graph CRF model.…”
Section: Training and Hyperparameters
Mentioning confidence: 99%