2018
DOI: 10.31234/osf.io/eanku
Preprint

Do people ask good questions?

Abstract: People ask questions in order to efficiently learn about the world. But do people ask good questions? In this work, we designed an intuitive, game-based task that allowed people to ask natural language questions to resolve their uncertainty. Question quality was measured through Bayesian ideal-observer models that considered large spaces of possible game states. During free-form question generation, participants asked a creative variety of useful and goal-directed questions, yet they rarely asked the best ques…

Cited by 4 publications (5 citation statements)
References 16 publications
“…A recent flurry of work has focused on integrating vision and language, leading to creative combinations of computer vision and NLP models. Active research areas include image-caption generation (Chen et al, 2015; Vinyals et al, 2014; Xu et al, 2015), visual question answering (Agrawal et al, 2017; Das et al, 2018; Johnson, Hariharan, van Der Maaten, Fei-Fei, et al, 2017), visual question asking (Mostafazadeh et al, 2016; Rothe et al, 2017; Wang & Lake, 2021), zero-shot visual category learning (Lazaridou et al, 2015; Xian et al, 2017), and instruction following (Hill, Lampinen, et al, 2020; Ruis et al, 2020). The multimodal nature of these tasks grounds the word representations acquired by these models, as we discuss below.…”
Section: Desiderata
confidence: 99%
“…Following a growing body of work in compositional Bayesian models (38, 58, 59, 61, 73–88), we assume that the representations learners must discover are built by combining primitives in a language of thought (LOT) (49) to form the mental analog of programs. In this setup, learners observe data (here, strings) and compare hypotheses that are built out of primitives, as a way to explain the data, much as scientists might consider possible physical laws which are compositions of mathematical operations.…”
Section: Formal Model
confidence: 99%
“…More recent work has shown how learners might construct generative mental theories of entire structures like the integers (infinite, ordered, and discrete) from a simpler basis that does not presuppose this conceptual structure, but is able to acquire many different structures across domains (Piantadosi, 2021). This general approach of learning procedures and representations is notable in drawing on inferential processes that have been argued for independently in concept learning (Amalric et al, 2017; Calvo & Symons, 2014; Depeweg et al, 2018; Erdogan et al, 2015; Goodman et al, 2008; Goodman et al, 2015; Lake et al, 2017; Piantadosi & Jacobs, 2016; Romano et al, 2018; Rothe et al, 2017; Rothe et al, 2016; Wang et al, 2019; Yildirim & Jacobs, 2015), potentially showing how number systems may be constructed like other—even artificial—systems of rules that adults acquire and fluidly manipulate.…”
Section: Symbolic Number Learning Is a Difficult Developmental Process
confidence: 99%