Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1410
A Unified Linear-Time Framework for Sentence-Level Discourse Parsing

Abstract: We propose an efficient neural framework for sentence-level discourse analysis in accordance with Rhetorical Structure Theory (RST). Our framework comprises a discourse segmenter to identify the elementary discourse units (EDUs) in a text, and a discourse parser that constructs a discourse tree in a top-down fashion. Both the segmenter and the parser are based on Pointer Networks and operate in linear time. Our segmenter yields an F1 score of 95.4, and our parser achieves an F1 score of 81.7 on the aggregated…
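The abstract describes a segmenter and a top-down parser built on pointer networks. As a rough illustration only (not the authors' released code), the sketch below recursively splits a span of encoded EDUs by pointing at a boundary with an attention distribution; the GRU encoder and decoder sizes, the scoring function, and greedy argmax decoding are all assumptions made for this example.

```python
import torch
import torch.nn as nn


class PointerSplitter(nn.Module):
    """Toy top-down span splitter in the spirit of the described parser."""

    def __init__(self, hidden_size: int = 64):
        super().__init__()
        # Encode the EDU sequence with a bidirectional GRU.
        self.encoder = nn.GRU(hidden_size, hidden_size,
                              batch_first=True, bidirectional=True)
        # Decoder state that tracks which span is currently being split.
        self.decoder_cell = nn.GRUCell(2 * hidden_size, 2 * hidden_size)
        # Projects the decoder state into a query for pointer attention.
        self.query_proj = nn.Linear(2 * hidden_size, 2 * hidden_size)

    def forward(self, edu_embeddings: torch.Tensor):
        # edu_embeddings: (num_edus, hidden_size) for a single sentence.
        enc, _ = self.encoder(edu_embeddings.unsqueeze(0))
        enc = enc.squeeze(0)                       # (num_edus, 2 * hidden)
        dec_state = enc.mean(dim=0, keepdim=True)  # (1, 2 * hidden)
        splits, stack = [], [(0, enc.size(0) - 1)]
        while stack:                               # top-down: root span first
            i, j = stack.pop()
            if i == j:                             # a single EDU is a leaf
                continue
            dec_state = self.decoder_cell((enc[i] + enc[j]).unsqueeze(0),
                                          dec_state)
            query = self.query_proj(dec_state).squeeze(0)    # (2 * hidden,)
            scores = enc[i:j] @ query              # pointer scores over i..j-1
            k = i + int(torch.argmax(scores))      # greedy split position
            splits.append((i, k, j))               # span i..j splits after k
            stack.append((k + 1, j))
            stack.append((i, k))
        return splits


if __name__ == "__main__":
    torch.manual_seed(0)
    model = PointerSplitter(hidden_size=64)
    fake_edus = torch.randn(5, 64)   # 5 EDUs with random embeddings
    print(model(fake_edus))          # list of (start, split, end) triples
```

Run on five randomly initialized EDU vectors, the script prints one (start, split, end) triple per internal node of the resulting binary discourse tree; each span is split exactly once, so the number of decoder steps grows linearly with the number of EDUs.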

Cited by 55 publications (78 citation statements)
References 24 publications (74 reference statements)
“…In all, the training data contains 7321 sentences, and the testing data contains 951 sentences. These numbers match the statistics reported by Lin et al (2019). We follow the same settings as in their experiments and randomly choose 10% of the training data for hyperparameter tuning.…”
Section: Discourse Parsing (citation type: mentioning, confidence: 63%)
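As a small illustration of the setup quoted above, the snippet below holds out a random 10% of the 7321 training sentences for hyperparameter tuning; the random seed and the use of integer sentence IDs are placeholders, not details reported in either paper.

```python
import random

train_ids = list(range(7321))   # 7321 training sentences (quoted above)
test_ids = list(range(951))     # 951 test sentences (quoted above)

random.seed(42)                 # assumed seed for reproducibility
dev_ids = set(random.sample(train_ids, k=round(0.10 * len(train_ids))))
train_ids = [i for i in train_ids if i not in dev_ids]

print(len(train_ids), len(dev_ids), len(test_ids))   # 6589 732 951
```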
“…For discourse parsing, our model uses the same structure as Lin et al (2019). The encoder is a 5-layer bidirectional RNN based on Gated Recurrent Units (BiGRU) (Cho et al, 2014).…”
Section: Model Specifics for Discourse Parsing (citation type: mentioning, confidence: 99%)
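For concreteness, here is a minimal PyTorch sketch of a 5-layer bidirectional GRU encoder matching the description in the statement above; the embedding and hidden dimensions are assumed placeholders, not values reported in either paper.

```python
import torch
import torch.nn as nn

encoder = nn.GRU(
    input_size=300,      # assumed word-embedding dimension
    hidden_size=256,     # assumed per-direction hidden size
    num_layers=5,        # "5-layer bidirectional RNN" from the statement
    bidirectional=True,
    batch_first=True,
)

tokens = torch.randn(2, 12, 300)          # (batch, sequence length, embedding)
outputs, final_state = encoder(tokens)    # outputs: (2, 12, 512)
print(outputs.shape)
```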