2014
DOI: 10.1016/j.cogpsych.2014.06.003

Win-Stay, Lose-Sample: A simple sequential algorithm for approximating Bayesian inference

Abstract: People can behave in a way that is consistent with Bayesian models of cognition, despite the fact that performing exact Bayesian inference is computationally challenging. What algorithms could people be using to make this possible? We show that a simple sequential algorithm "Win-Stay, Lose-Sample", inspired by the Win-Stay, Lose-Shift (WSLS) principle, can be used to approximate Bayesian inference. We investigate the behavior of adults and preschoolers on two causal learning tasks to test whether people might …
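
The abstract describes the algorithm only at a high level. Below is a minimal sketch of one way a Win-Stay, Lose-Sample learner could be written for a small discrete hypothesis space; the staying rule used here (keep the current hypothesis with probability equal to the likelihood of the latest observation) and the coin-bias example are illustrative assumptions, not the paper's exact specification.

```python
import math
import random


def wsls(hypotheses, prior, likelihood, data_stream, rng=None):
    """Win-Stay, Lose-Sample over a small discrete hypothesis space.

    After each observation, keep the current hypothesis with probability
    equal to the likelihood of that observation under it ("win-stay");
    otherwise resample a hypothesis from the posterior over all data seen
    so far ("lose-sample"). Yields the hypothesis held after each observation.
    """
    rng = rng or random.Random(0)
    seen = []
    current = rng.choices(hypotheses, weights=[prior[h] for h in hypotheses])[0]
    for d in data_stream:
        seen.append(d)
        if rng.random() >= likelihood(d, current):
            # "Lose": resample from the posterior given everything observed so far.
            post = [prior[h] * math.prod(likelihood(x, h) for x in seen)
                    for h in hypotheses]
            current = rng.choices(hypotheses, weights=post)[0]
        yield current


# Illustrative use: two candidate coin biases and a stream of flips (1 = heads).
hyps = [0.5, 0.9]
prior = {0.5: 0.5, 0.9: 0.5}
lik = lambda flip, h: h if flip == 1 else 1 - h
print(list(wsls(hyps, prior, lik, [1, 1, 1, 0, 1, 1])))
```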

Cited by 114 publications (96 citation statements) | References 60 publications

Citing statements:
“…It is impossible for any system, human or computer, to consider and compare all of the possible hypotheses relevant to a realistic learning problem. Computer scientists and statisticians often use "sampling" to help solve this problem - stochastically selecting some hypotheses rather than others - and there is evidence that people, including young children, do something similar (43-45).…”
mentioning, confidence: 99%
“…For example, Shi, Griffiths, Feldman, & Sanborn (2010) discuss how exemplar models may provide a possible mechanism for implementing Bayesian inference, since these models allow an approximation process called importance sampling. Other examples include the work of Bonawitz et al. (2011), who discuss how a simple sequential algorithm can be used to approximate Bayesian inference in a basic causal learning task, and that of Pearl, Goldwater, and Steyvers (2011), who (as described in section 3) investigated various online algorithms for Bayesian models of word segmentation. See also McClelland (1998) for a discussion of how neural network architectures can be used to approximate optimal Bayesian inference (again emphasizing that the connectionist and Bayesian frameworks are not so much in opposition as they are addressing different aspects of the learning problem, with one focusing on the description of the task and the other focusing on the implementation).…”
Section: Algorithms
mentioning, confidence: 99%
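
As a concrete illustration of the importance-sampling idea attributed to Shi et al. (2010) above, the sketch below estimates a posterior expectation by weighting stored exemplars by the likelihood of the observed datum. The function name, the uniform-prior exemplars, and the coin example are illustrative assumptions, not the cited models' actual implementation.

```python
import random


def importance_estimate(f, exemplars, likelihood, datum):
    """Importance-sampling estimate of the posterior expectation E[f(h) | datum].

    Exemplars are treated as draws from the prior; each is weighted by the
    likelihood of the observed datum under it, and the weighted average of f
    approximates the posterior expectation.
    """
    weights = [likelihood(datum, h) for h in exemplars]
    total = sum(weights)
    if total == 0:
        raise ValueError("datum has zero likelihood under every exemplar")
    return sum(w * f(h) for w, h in zip(weights, exemplars)) / total


# Illustrative use: posterior mean of a coin's bias after observing one head,
# with exemplars drawn from a uniform prior over [0, 1] (exact answer: 2/3).
rng = random.Random(0)
exemplars = [rng.random() for _ in range(5000)]
lik = lambda flip, theta: theta if flip == 1 else 1 - theta
print(importance_estimate(lambda theta: theta, exemplars, lik, 1))
```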
“…Another example is Monte Carlo approximation of Bayesian inference (e.g., Denison, Bonawitz, Gopnik, & Griffiths, 2013; Shi, Griffiths, Feldman, & Sanborn, 2010; Bonawitz, Denison, Chen, Gopnik, & Griffiths, 2011), which may show how a learning process could typically produce correct conclusions by approximating (in a bounded way) complex functions using simple evolved capacities (e.g., exemplar-based reasoning).…”
Section: A General Framework for Adaptively Rational Learning
mentioning, confidence: 99%
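
The Monte Carlo point in this last statement can be made concrete with an even simpler scheme than those cited. The sketch below uses plain rejection sampling, offered only as an assumed illustration rather than the cited papers' algorithms: hypotheses are drawn from the prior, data are simulated under each, and only hypotheses whose simulations reproduce the observation are kept, so the kept hypotheses approximate posterior samples.

```python
import random


def rejection_sample_posterior(sample_prior, simulate, observed, n_samples, seed=0):
    """Approximate posterior sampling by rejection: draw a hypothesis from the
    prior, simulate data under it, and keep the hypothesis only when the
    simulation reproduces the observation."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_samples:
        h = sample_prior(rng)
        if simulate(h, rng) == observed:
            accepted.append(h)
    return accepted


# Illustrative use: was a coin with bias 0.2 or 0.8 behind 3 heads in 4 flips?
sample_prior = lambda r: r.choice([0.2, 0.8])
simulate = lambda theta, r: sum(r.random() < theta for _ in range(4))
samples = rejection_sample_posterior(sample_prior, simulate, 3, 2000)
print(sum(h == 0.8 for h in samples) / len(samples))  # approx. P(bias = 0.8 | data), about 0.94
```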