2018
DOI: 10.48550/arxiv.1810.04700
Preprint

End-to-End Content and Plan Selection for Data-to-Text Generation

Cited by 2 publications (2 citation statements) | References 0 publications
“…Pointer Generator (See et al., 2017): an LSTM-based seq2seq model with a copy mechanism. While originally designed for text summarization, it is also used in data-to-text generation (Gehrmann et al., 2018). BERT-to-BERT (Rothe et al., 2020): a transformer encoder-decoder model (Vaswani et al., 2017) initialized with BERT (Devlin et al., 2018).…”
Section: Methods
confidence: 99%
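For context on the copy mechanism this excerpt refers to: at each decoding step a pointer-generator mixes the vocabulary softmax with the attention weights over source tokens, P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum over {i : x_i = w} of a_i (See et al., 2017). Below is a minimal NumPy sketch of that mixing step only; the function name, shapes, and toy values are illustrative assumptions, not the cited implementation.

```python
import numpy as np

def pointer_generator_step(p_vocab, attention, src_ids, p_gen):
    """One decoding step of a pointer-generator output layer (sketch).

    p_vocab   : (V,) softmax distribution over the output vocabulary
    attention : (S,) attention weights over the S source tokens
    src_ids   : (S,) vocabulary ids of the source tokens
    p_gen     : scalar in [0, 1], probability of generating vs. copying
    """
    final = p_gen * p_vocab
    # Scatter-add the copy probability mass onto the source token ids;
    # this is what lets the model emit unseen entity names verbatim.
    np.add.at(final, src_ids, (1.0 - p_gen) * attention)
    return final

# Toy check: a 5-word vocabulary and 3 source tokens (ids 4, 1, 4).
p_vocab = np.array([0.1, 0.2, 0.3, 0.2, 0.2])
attention = np.array([0.5, 0.3, 0.2])
dist = pointer_generator_step(p_vocab, attention, np.array([4, 1, 4]), p_gen=0.7)
assert np.isclose(dist.sum(), 1.0)  # still a valid probability distribution
```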
“…A prime example is the delexicalization technique used by most current generators (e.g., Oh and Rudnicky, 2000; Mairesse et al., 2010; Wen et al., 2015a,b; Juraska et al., 2018): it is generally assumed that attribute (slot) values from the input meaning representation (MR) can be replaced by placeholders during generation and inserted into the output verbatim. Delexicalization or an analogous technique, such as a copy mechanism (Gu et al., 2016; Gehrmann et al., 2018), is required for most generation scenarios to allow generalization to unseen entity names: sets of entities are open (potentially infinite and subject to change) while training data is scarce. However, the verbatim insertion assumption does not hold for languages with extensive noun inflection: attribute values need to be inflected here to produce fluent outputs (see Figure 1).…”
Section: Introduction
confidence: 99%
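As an illustration of the delexicalization technique this excerpt describes, here is a minimal Python sketch: slot values are swapped for placeholders before generation and re-inserted verbatim afterwards. The function names and the E2E-style slot values are made up for the example; real systems match values more robustly, and the verbatim re-insertion in relexicalize is exactly the assumption the excerpt says fails for languages with rich noun inflection.

```python
def delexicalize(text, mr):
    """Replace slot values from a meaning representation (MR) with
    placeholders, returning the template and a map for re-insertion."""
    mapping = {}
    for slot, value in mr.items():
        placeholder = f"<{slot.upper()}>"
        text = text.replace(value, placeholder)
        mapping[placeholder] = value
    return text, mapping

def relexicalize(template, mapping):
    """Insert the original slot values back verbatim. For languages with
    extensive noun inflection the value would instead need case/number
    agreement with its context, which this naive step cannot provide."""
    for placeholder, value in mapping.items():
        template = template.replace(placeholder, value)
    return template

mr = {"name": "Blue Spice", "food": "Italian"}
template, mapping = delexicalize("Blue Spice serves Italian food.", mr)
# template == "<NAME> serves <FOOD> food."
assert relexicalize(template, mapping) == "Blue Spice serves Italian food."
```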