2014
DOI: 10.1162/coli_a_00199

Stochastic Language Generation in Dialogue using Factored Language Models

Abstract: Most previous work on trainable language generation has focused on two paradigms: (a) using a statistical model to rank a set of pre-generated utterances, or (b) using statistics to determine the generation decisions of an existing generator. Both approaches rely on the existence of a handcrafted generation component, which is likely to limit their scalability to new domains. The first contribution of this article is to present BAGEL, a fully data-driven generation method that treats the language generation task as a search for the most likely sequence of semantic concepts and realization phrases. […]
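The abstract frames generation as a search for the most likely sequence of semantic concepts and realization phrases. The toy sketch below only illustrates that framing; it is not BAGEL's factored language model. The concept names, candidate phrases, probabilities, and the exhaustive search are all hypothetical stand-ins chosen to make the idea concrete.

# Illustrative sketch only: a toy search for the most likely sequence of
# realization phrases given an ordered list of semantic concepts.
# The phrase inventory and probabilities below are invented, not from BAGEL.
import math
from itertools import product

# Hypothetical candidate phrases per semantic concept, with P(phrase | concept).
PHRASES = {
    "inform(name)":       [("X is", 0.6), ("the restaurant X is", 0.4)],
    "inform(food)":       [("a Chinese place", 0.5), ("serving Chinese food", 0.5)],
    "inform(pricerange)": [("in the cheap price range", 0.7), ("and it is cheap", 0.3)],
}

def transition(prev_phrase: str, phrase: str) -> float:
    # A real factored LM would back off over stack/phrase factors; here we
    # only penalize adjacent phrases that both contain "and".
    return 0.5 if ("and" in prev_phrase and "and" in phrase) else 1.0

def best_realisation(concepts):
    """Exhaustive search (fine for toy inputs) for the highest-scoring phrase sequence."""
    best_seq, best_score = None, -math.inf
    for seq in product(*(PHRASES[c] for c in concepts)):
        score, prev = 0.0, "<s>"
        for phrase, p_emit in seq:
            score += math.log(p_emit) + math.log(transition(prev, phrase))
            prev = phrase
        if score > best_score:
            best_seq, best_score = [p for p, _ in seq], score
    return " ".join(best_seq)

print(best_realisation(["inform(name)", "inform(food)", "inform(pricerange)"]))
# -> "X is a Chinese place in the cheap price range"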


Cited by 52 publications (40 citation statements)
References 45 publications
“…While the first NLG systems relied on handwritten rules or templates that were filled with the input information (Cheyer and Guzzoni, 2006; Mirkovic et al., 2006), the availability of larger datasets has accelerated the progress in statistical methods to train NLG systems from data-text pairs in the last twenty years (Oh and Rudnicky, 2000; Mairesse and Young, 2014). Generating output via language models based on recurrent neural networks (RNNs) conditioned on the input (Sutskever et al., 2011) proved to be an effective method for end-to-end NLG (Wen et al., 2015a,b, 2016).…”
Section: Input and Output Representations (mentioning)
confidence: 99%
“…Recent data-driven approaches in NLG have been successful in modeling end-to-end generation from unaligned input-output, cf. (Angeli et al., 2010; Mairesse and Young, 2014; Dušek and Jurcicek, 2015; Wen et al., 2016). However, these systems have mostly been tested on datasets (e.g., in the restaurant domain) that require describing very similar entities, entities that are encoded in MRs that have considerable lexical overlap with the target text output.…”
Section: Main Research Questions (mentioning)
confidence: 99%
“…Recent approaches cover the use of natural language generation in interactive spoken dialogue systems through models which dynamically adapt to the users' level of expertise [33], a fully data-driven generation method that treats the language generation task as a search for the most likely sequence of semantic concepts and realization phrases [34], and a domain-independent approximation that performs content determination and surface realization in a joint unsupervised fashion through the use of a probabilistic context-free grammar [35].…”
Section: Design of an NLG System (mentioning)
confidence: 99%