Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1062

Learning to Speak and Act in a Fantasy Text Adventure Game

Abstract: We introduce a large-scale crowdsourced text adventure game as a research platform for studying grounded dialogue. In it, agents can perceive, emote, and act whilst conducting dialogue with other agents. Models and humans can both act as characters within the game. We describe the results of training state-of-the-art generative and retrieval models in this setting. We show that in addition to using past dialogue, these models are able to effectively use the state of the underlying world to condition their predictions.
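
The abstract describes models that condition on both the dialogue history and the state of the underlying game world (setting, persona, objects). As a rough, hypothetical illustration of what such grounding can look like as model input, the sketch below flattens a toy game state into a single context string and ranks candidate replies with a placeholder word-overlap scorer; the field names, prefixes, and scoring function are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch (assumed, not the paper's code): flatten a grounded game
# state into one context string and rank candidate replies with a toy scorer.
from dataclasses import dataclass, field
from typing import List


@dataclass
class GameState:
    setting: str                      # description of the current location
    persona: str                      # the speaking character's persona
    objects: List[str] = field(default_factory=list)
    dialogue_history: List[str] = field(default_factory=list)


def build_context(state: GameState) -> str:
    """Concatenate world state and dialogue history into a single input string."""
    parts = [
        f"_setting {state.setting}",
        f"_persona {state.persona}",
        f"_objects {', '.join(state.objects)}",
    ]
    parts.extend(f"_utterance {u}" for u in state.dialogue_history)
    return "\n".join(parts)


def score(context: str, candidate: str) -> float:
    """Placeholder scorer: word overlap between context and candidate.
    A trained retrieval model would replace this in a real system."""
    ctx_words = set(context.lower().split())
    cand_words = set(candidate.lower().split())
    return len(ctx_words & cand_words) / max(len(cand_words), 1)


def rank_candidates(state: GameState, candidates: List[str]) -> List[str]:
    """Return candidate replies sorted from most to least relevant."""
    context = build_context(state)
    return sorted(candidates, key=lambda c: score(context, c), reverse=True)
```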

Cited by 95 publications (94 citation statements)
References 41 publications

“…The limited human evaluation size limits what can be inferred, but it indicates the problem is solved to the extent that ALOHA is able to slightly outperform humans on two folds and perform closely on another two folds. Even humans do not perform extremely well, demonstrating this task is more difficult than typical dialogue retrieval tasks (Urbanek et al. 2019; Dinan et al. 2019).…”
Section: Performance: ALOHA vs. Humans
Citation type: mentioning (confidence: 99%)
“…These are trained to produce LSRM-BERT and LSRM-Poly, respectively. Prior work (Dinan et al. 2019; Urbanek et al. 2019; Zhang et al. 2018) chooses 20 candidate responses, and for comparison purposes, we do the same.…”
Section: Language Style Recovery Module (LSRM)
Citation type: mentioning (confidence: 99%)
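
The 20-candidate setup mentioned in the statement above is a common retrieval-evaluation protocol: the model must pick the gold response out of 20 candidates (the true reply plus 19 sampled distractors), and accuracy is reported as hits@1. The sketch below is only a hedged illustration of that metric; the scoring function is assumed to come from whatever retrieval model is being evaluated, and nothing here reproduces the cited papers' actual evaluation code.

```python
# Minimal sketch (assumed): hits@1 over 20 candidates for a retrieval model.
import random
from typing import Callable, Sequence


def hits_at_1(
    score_fn: Callable[[str, str], float],   # scores (context, candidate) pairs
    contexts: Sequence[str],
    gold_responses: Sequence[str],
    distractor_pool: Sequence[str],
    n_candidates: int = 20,
    seed: int = 0,
) -> float:
    """Fraction of examples where the gold response outscores the sampled distractors."""
    rng = random.Random(seed)
    correct = 0
    for context, gold in zip(contexts, gold_responses):
        # Sample 19 distractors that are not the gold response itself.
        distractors = rng.sample(
            [d for d in distractor_pool if d != gold], n_candidates - 1
        )
        best = max(distractors + [gold], key=lambda c: score_fn(context, c))
        correct += int(best == gold)
    return correct / len(contexts)
```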
“…Finally, although not a sequential decision-making problem, LIGHT (Urbanek et al. 2019) is a crowdsourced dataset of text-adventure game dialogues. The authors demonstrate that transformer-based models trained with supervision can generate contextually relevant dialogue, actions, and emotes.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…The aspect of interestingness is particularly difficult to preserve if a game needs specific training [17] or involves (potentially long) free-text entries produced by the user [18]. Text adventure games provide an example of attractive environments that can naturally lead to building useful language resources [19]. The choice of Codenames as the platform for collecting valuable lexical data is also motivated by the popularity of the game and its attractiveness to players across languages.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)