2014
DOI: 10.1017/s1351324913000375

Hierarchical reinforcement learning for situated natural language generation

Abstract: Natural Language Generation systems in interactive settings often face a multitude of choices, given that the communicative effect of each utterance they generate depends crucially on the interplay between its physical circumstances, addressee and interaction history. This is particularly true in interactive and situated settings. In this paper we present a novel approach for situated Natural Language Generation in dialogue that is based on hierarchical reinforcement learning and learns the best utterance for a co…
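The abstract's central idea, learning utterance choices by trial and error within a hierarchy of subtasks, can be illustrated with a small tabular sketch. The example below is a minimal assumed setup: the split into a content-selection agent and a surface-realisation agent, the "easy"/"confusing" context feature, and the simulated reward are illustrative stand-ins, not the paper's actual state-action design or reward function.

```python
# Minimal sketch of hierarchical RL for utterance choice (illustrative only;
# states, actions and rewards are assumptions, not the cited paper's setup).
import random
from collections import defaultdict

class QAgent:
    """Tabular, epsilon-greedy learner for one subtask in the hierarchy."""
    def __init__(self, actions, alpha=0.1, epsilon=0.2):
        self.q = defaultdict(float)            # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.epsilon = alpha, epsilon

    def act(self, state):
        if random.random() < self.epsilon:     # explore
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])  # exploit

    def update(self, state, action, reward):
        # One-step episodes, so the target is just the observed reward.
        self.q[(state, action)] += self.alpha * (reward - self.q[(state, action)])

# Parent task decides *what* to say; child task decides *how* to say it.
content_agent = QAgent(actions=["refer_landmark", "give_direction"])
realisation_agent = QAgent(actions=["terse", "verbose"])

def simulated_user_reward(context, content, style):
    """Toy stand-in for a user simulator: terse directions suffice in easy
    contexts, verbose landmark references help in confusing ones."""
    if context == "easy":
        return 1.0 if (content == "give_direction" and style == "terse") else 0.0
    return 1.0 if (content == "refer_landmark" and style == "verbose") else 0.0

for episode in range(5000):
    context = random.choice(["easy", "confusing"])      # situated context feature
    content = content_agent.act(context)                # parent-level choice
    style = realisation_agent.act((context, content))   # child-level choice
    reward = simulated_user_reward(context, content, style)  # trial-and-error feedback
    realisation_agent.update((context, content), style, reward)
    content_agent.update(context, content, reward)

# Inspect the learned greedy policy for a confusing context.
best_content = max(content_agent.actions,
                   key=lambda a: content_agent.q[("confusing", a)])
best_style = max(realisation_agent.actions,
                 key=lambda a: realisation_agent.q[(("confusing", best_content), a)])
print(best_content, best_style)   # expected: refer_landmark verbose
```

After enough simulated episodes, the greedy policy prefers terse directions in easy contexts and verbose landmark references in confusing ones, which is the kind of context-sensitive trade-off the hierarchical learner is meant to capture.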

Cited by 33 publications (36 citation statements)
References 43 publications (59 reference statements)
“…To our knowledge, we report one of the first multi-domain dialogue systems using deep reinforcement learning. Future work includes applying neural-based dialogue systems to larger sets of domains, to language generation using a divide-and-conquer approach [28], to multi-task multimodal interaction using different types of devices and machines, and evaluating neural-based systems in realistic scenarios with genuine users.…”
Section: Related Work and Discussion (mentioning, confidence: 99%)
“…Similarly, gestures can be jointly optimized with natural language generation to exhibit more natural multimodal output. For other kinds of joint optimizations, see [Dethlefs et al 2012b; Dethlefs and Cuayáhuitl 2011b; 2011a; Lemon 2011]. (5) Optimize dialogue control by adding linguistically-motivated phenomena such as multimodal turn-taking [Chao and Thomaz 2012; …] and incremental processing [Schlangen and Skantze 2009; Dethlefs et al 2012a; …].…”
Section: Discussion (mentioning, confidence: 99%)
“…Especially, the decomposition of behavior into sub-behaviors leads to faster learning, reduces computational demands, provides opportunities for the reuse of subsolutions, and is more suitable for multi-task systems. The utility of hierarchical RL typically increases with the complexity and size of the state-action space of a system, and it has also been adopted for dialogue in combination with robust speech recognition [Pineau 2004] and natural language generation [Dethlefs and Cuayáhuitl 2010; …]. However, previous works in this area have ignored interaction flexibility.…”
Section: Reinforcement Learning for Flexible and Scalable Dialogue Co… (mentioning, confidence: 99%)
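The benefit claimed in the statement above, that decomposing behavior into sub-behaviors shrinks what must be learned and allows subsolutions to be reused, can be made concrete with a back-of-the-envelope comparison. The numbers below are assumed purely for illustration and do not come from the cited work.

```python
# Toy comparison of a flat policy vs. a hierarchical one that reuses a single
# per-slot sub-policy. All sizes are illustrative assumptions.
slots, values_per_slot, actions = 6, 4, 10

flat_states = values_per_slot ** slots        # every joint slot configuration
flat_table = flat_states * actions            # flat state-action table entries

sub_table = values_per_slot * actions         # one reusable per-slot sub-policy
root_table = (slots + 1) * slots              # root state: number of slots filled;
                                              # root action: which subtask to run next
hierarchical_table = sub_table + root_table   # sub-policy shared across all slots

print(f"flat table:         {flat_table} entries")         # 40960
print(f"hierarchical table: {hierarchical_table} entries")  # 82
```

The reduction comes from reusing one sub-policy across all slots and from letting the root reason over an abstracted state, which is the sense in which hierarchical decomposition lowers computational demands and speeds up learning.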
“…Dethlefs, N. and Cuayáhuitl, H. presented a novel approach for situated Natural Language Generation in dialogue that is based on hierarchical reinforcement learning and learns the best utterance for a context by optimisation through trial and error [26]. Ferreira, T. C. et al. introduced a nondeterministic method for referring expression generation.…”
Section: Related Work (mentioning, confidence: 99%)