2023
DOI: 10.1109/taslp.2023.3306710
RST Discourse Parsing as Text-to-Text Generation

Xinyu Hu,
Xiaojun Wan
Cited by 1 publication (3 citation statements)
References 42 publications
“…However, most of them split the parsing process into two steps: EDU segmentation and RST tree prediction, for which the gold EDU labels are often required. Considering that the datasets of DocMT are not equipped with such information, we follow Hu and Wan (2023) to train an end-to-end RST parser from scratch through a Seq2Seq reformulation method.…”
Section: RST Parsing
Confidence: 99%
“…The linearized sequence is designed to contain the complete original input text for better performance, according to the observation of Paolini et al (2021). More details can be found in Hu and Wan (2023), and an example is shown in Figure 3(d).…”
Section: RST Parsing
Confidence: 99%
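The statements above describe reformulating RST parsing as Seq2Seq generation, where the target is a linearized tree that repeats the complete input text, so no gold EDU segmentation is needed. Below is a minimal, hypothetical sketch of what such a linearization could look like; the bracket scheme, relation labels, and class names here are illustrative assumptions, not the exact format of Hu and Wan (2023).

```python
# Illustrative sketch (not the paper's exact scheme): linearize an RST
# tree into a flat target string for a Seq2Seq model. The leaves carry
# the full original text, so EDU boundaries are implied by the output
# rather than given as gold labels.
from dataclasses import dataclass
from typing import List, Union


@dataclass
class EDU:
    text: str  # an elementary discourse unit's surface text


@dataclass
class Node:
    relation: str          # e.g. "Elaboration" (assumed label set)
    nuclearity: str        # e.g. "NS", "SN", "NN"
    children: List["Tree"]


Tree = Union[EDU, Node]


def linearize(tree: Tree) -> str:
    """Recursively emit a bracketed sequence covering the whole input."""
    if isinstance(tree, EDU):
        return f"[EDU {tree.text} ]"
    inner = " ".join(linearize(c) for c in tree.children)
    return f"[{tree.relation}:{tree.nuclearity} {inner} ]"


tree = Node("Elaboration", "NS", [
    EDU("It rained all day,"),
    EDU("so the match was cancelled."),
])
print(linearize(tree))
# [Elaboration:NS [EDU It rained all day, ] [EDU so the match was cancelled. ] ]
```

A decoder trained on such targets predicts segmentation and tree structure jointly, which is why the citing work can train the parser end to end on corpora that lack gold EDU annotations.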