2019
DOI: 10.1609/aaai.v33i01.33016908

Data-to-Text Generation with Content Selection and Planning

Abstract: Recent advances in data-to-text generation have led to the use of large-scale datasets and neural network models which are trained end-to-end, without explicitly modeling what to say and in what order. In this work, we present a neural network architecture which incorporates content selection and planning without sacrificing end-to-end training. We decompose the generation task into two stages. Given a corpus of data records (paired with descriptive documents), we first generate a content plan highlighting which information should be mentioned and in which order, and then generate the document while taking the content plan into account. Automatic and human-based evaluation experiments show that our model outperforms strong baselines, improving the state of the art on the recently released RotoWire dataset.
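The two-stage decomposition described in the abstract is easy to see in skeleton form. Below is a minimal PyTorch sketch, not the paper's actual architecture: the dimensions, the greedy top-k salience planner (standing in for the paper's pointer-network plan decoder), and the plain LSTM text decoder are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class TwoStageDataToText(nn.Module):
    """Toy two-stage model: (1) select and order input records into a
    content plan, (2) condition a text decoder on that plan."""

    def __init__(self, record_dim=32, hidden_dim=64, vocab_size=1000):
        super().__init__()
        # Stage 1: a content-selection gate scores each record's salience.
        self.record_enc = nn.Linear(record_dim, hidden_dim)
        self.select_gate = nn.Linear(hidden_dim, 1)
        # Stage 2: a decoder that reads only the planned records.
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def plan(self, records, k=5):
        """records: (num_records, record_dim) -> top-k record indices,
        ordered by salience (a greedy stand-in for a pointer-network
        planner)."""
        h = torch.tanh(self.record_enc(records))   # encode each record
        scores = self.select_gate(h).squeeze(-1)   # one score per record
        order = torch.topk(scores, k=min(k, len(scores))).indices
        return order, h

    def forward(self, records):
        order, h = self.plan(records)
        plan = h[order].unsqueeze(0)               # (1, k, hidden_dim)
        # The real model generates words with attention and copying;
        # here we just run the planned records through the decoder.
        dec_out, _ = self.decoder(plan)
        return self.out(dec_out)                   # per-step word logits

model = TwoStageDataToText()
records = torch.randn(12, 32)   # 12 fake records, e.g. box-score cells
logits = model(records)
print(logits.shape)             # torch.Size([1, 5, 1000])
```

The point of the decomposition is that "what to say" (selection) and "in what order" (planning) become an explicit intermediate output that the surface realizer conditions on, while the whole pipeline remains differentiable end-to-end.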

Cited by 224 publications (318 citation statements)
References 56 publications
“…To improve these models, a number of works [16,28,40] have proposed innovative decoding modules based on planning and templates, to ensure factual and coherent mentions of records in the generated descriptions. For example, Puduppully et al. [28] propose a two-step decoder which first targets specific records and then uses them as a plan for the actual text generation. Similarly, Li et al. [16] proposed a delayed copy mechanism where their decoder also acts in two steps: 1) using a classical LSTM decoder to generate delexicalized text, and 2) using a pointer network [38] to replace placeholders with records from the input data.…”
Section: Related Work
confidence: 99%
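The delayed-copy idea quoted above is concrete enough to sketch. The toy Python snippet below illustrates only the second step (filling placeholder slots from the input records with a pointer-style match); the placeholder names, the dot-product scoring, and the random states are assumptions for illustration, not Li et al.'s exact mechanism.

```python
import torch

def fill_placeholders(delex_tokens, slot_states, record_keys, record_values):
    """delex_tokens: first-stage decoder output with placeholder slots,
        e.g. ["<TEAM>", "beat", "<TEAM>", "by", "<NUM>", "points", "."]
    slot_states: (num_slots, d) decoder states at each placeholder
    record_keys: (num_records, d) encodings of the input records
    record_values: surface strings, one per record."""
    # Pointer scores: dot product between slot states and record encodings.
    scores = slot_states @ record_keys.T        # (num_slots, num_records)
    picks = scores.argmax(dim=-1)               # chosen record per slot
    out, slot = [], 0
    for tok in delex_tokens:
        if tok.startswith("<"):                 # a placeholder slot
            out.append(record_values[int(picks[slot])])
            slot += 1
        else:
            out.append(tok)
    return " ".join(out)

d = 8
delex = ["<TEAM>", "beat", "<TEAM>", "by", "<NUM>", "points", "."]
slot_states = torch.randn(3, d)                 # one state per placeholder
record_keys = torch.randn(4, d)
record_values = ["Raptors", "Celtics", "12", "TD Garden"]
print(fill_placeholders(delex, slot_states, record_keys, record_values))
```

Delaying the copy in this way lets the first stage commit only to the sentence structure, so every concrete value in the output is traceable to a specific input record.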