2018
DOI: 10.1162/tacl_a_00037
Planning, Inference and Pragmatics in Sequential Language Games

Abstract: We study sequential language games in which two players, each with private information, communicate to achieve a common goal. In such games, a successful player must (i) infer the partner's private information from the partner's messages, (ii) generate messages that are most likely to help with the goal, and (iii) reason pragmatically about the partner's strategy. We propose a model that captures all three characteristics and demonstrate their importance in capturing human behavior on a new goal-oriented datas…

Cited by 20 publications (23 citation statements)
References 21 publications
“…In other words, a model that could generate a sequence of questions and closed “yes/no” answers, which progressively reduce uncertainty about the subject of conversation (i.e., contextual knowledge). These sorts of sequential communication games have been tackled extensively in the literature, including a single round of question-answer ‘whisky pricing’ interaction (Hawkins et al., 2015), the restricted ‘cards corpus’ with one-off communication (Potts, 2012), the sequential ‘info-jigsaw’ game (Khani et al., 2018), the ‘hat game’ in which agents learn to communicate by observing actions (Foerster et al., 2016), and conversations about visual stimuli (Das et al., 2017). Having specified the generative model for our “Twenty Questions” paradigm, we made no further assumptions; we used off-the-shelf (marginal) message passing to simulate neuronal processing (Dauwels, 2007; Friston et al., 2017c; Parr and Friston, 2018; Winn and Bishop, 2005).…”
Section: Generative Models of Language
confidence: 99%
“…Our proposed extension, maintaining a (belief) state, could be used to define a Markov Decision Process in which different question or answer selections lead to anticipated transitions in beliefs about goals or the world, respectively. A related approach was recently taken by Khani, Goodman, and Liang (2018), who used a forward-looking recursive utility to define an action policy in a sequential language game. In this task, pragmatic agents took turns saying one- or two-word utterances, informing their partner about the world state, but were not permitted to ask questions.…”
Section: Discussion
confidence: 99%
“…The rational speech act (RSA) model, introduced by Golland et al. [24] and developed in [22, 54, 25, 2], accommodates the idea of using not only the utterance but also the selection of the utterance to understand the speaker. Previous works in these areas come mainly from the perspectives of human action interpretation [21, 60, 36], language emergence [69, 35], linguistics [33, 2, 13], and cognitive science [25, 8]. To our knowledge, our work is the first to relate pedagogy and recursive reasoning to generic parameter learning and to show a provable improvement.…”
Section: Related Work
confidence: 99%
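The RSA recursion mentioned in the excerpt above can be illustrated with a minimal sketch. The utterances, referents, and truth table here are hypothetical toy values, not taken from any of the cited models; the point is only the literal-listener / pragmatic-speaker / pragmatic-listener chain in its simplest form.

```python
# Minimal Rational Speech Act (RSA) sketch on a toy reference game.
# All names and values below are hypothetical, for illustration only.

utterances = ["hat", "glasses"]
states = ["face1", "face2", "face3"]  # hypothetical referents

# Literal semantics: 1 if the utterance is true of the state (assumed).
meaning = {
    ("hat", "face1"): 1, ("hat", "face2"): 1, ("hat", "face3"): 0,
    ("glasses", "face1"): 0, ("glasses", "face2"): 1, ("glasses", "face3"): 1,
}

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()} if z else d

def literal_listener(u):
    # L0: condition a uniform prior over states on the utterance being true.
    return normalize({s: float(meaning[(u, s)]) for s in states})

def pragmatic_speaker(s, alpha=1.0):
    # S1: pick utterances in proportion to how well L0 recovers state s.
    return normalize({u: literal_listener(u)[s] ** alpha for u in utterances})

def pragmatic_listener(u):
    # L1: invert the pragmatic speaker under a uniform state prior.
    return normalize({s: pragmatic_speaker(s)[u] for s in states})

print(pragmatic_listener("hat"))
```

With these toy semantics, hearing "hat" leads the pragmatic listener to prefer face1, since a speaker wanting to refer to face2 would have been equally informative with "glasses"; this is the standard implicature effect that distinguishes L1 from L0.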