2020
DOI: 10.48550/arxiv.2002.02878
Preprint

I love your chain mail! Making knights smile in a fantasy game world: Open-domain goal-oriented dialogue agents

Abstract: Dialogue research tends to distinguish between chit-chat and goal-oriented tasks. While the former is arguably more naturalistic and has a wider use of language, the latter has clearer metrics and a straightforward learning signal. Humans effortlessly combine the two, for example engaging in chit-chat with the goal of exchanging information or eliciting a specific response. Here, we bridge the divide between these two domains in the setting of a rich multi-player text-based fantasy environment where agents and…

Cited by 1 publication (3 citation statements)
References 25 publications (33 reference statements)
“…Visual and embodied agents using natural language instructions (Bisk et al., 2016; Kolve et al., 2017; Anderson et al., 2018) or in language-based action spaces (Das et al., 2017) utilize interactivity and environment grounding but have no notion of agent motivations, nor make any attempt to explicitly model commonsense reasoning. Perhaps closest in spirit to this work is Prabhumoye et al. (2020), where they use artificially selected goals in LIGHT and train RL agents to achieve them. Similarly to the others, this work does not contain the motivations provided by LIGHT-Quests nor any modeling of commonsense reasoning.…”
Section: Related Work
confidence: 99%
“…The environment as seen in Figure 4 consists of three components. The first is a partner agent, which is a model trained to play other agents in the game, as in Prabhumoye et al. (2020). Next is the game engine, which determines the effects of actions on the underlying game graph (Urbanek et al., 2019).…”
Section: LIGHT RL Environment
confidence: 99%