2018
DOI: 10.31219/osf.io/pm4au
Preprint
Win-Stay, Lose-Sample: A Simple Sequential Algorithm for Approximating Bayesian Inference

Abstract: People can behave in a way that is consistent with Bayesian models of cognition, despite the fact that performing exact Bayesian inference is computationally challenging. What algorithms could people be using to make this possible? We show that a simple sequential algorithm, "Win-Stay, Lose-Sample", inspired by the Win-Stay, Lose-Shift (WSLS) principle, can be used to approximate Bayesian inference. We investigate the behavior of adults and preschoolers on two causal learning tasks to test whether people might …
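The abstract's algorithm can be illustrated with a minimal sketch: maintain a single current hypothesis; after each observation, keep it ("win-stay") or resample a hypothesis from the updated posterior ("lose-sample"). This is a hedged illustration, not the authors' implementation: the function name and the specific stay rule used here (stay with probability equal to the likelihood of the new observation under the current hypothesis) are assumptions, and the paper's exact acceptance rule may differ.

```python
import random

def win_stay_lose_sample(prior, likelihood, data, rng=random):
    """Approximate posterior sampling over a discrete hypothesis space.

    prior      -- dict mapping hypothesis -> prior probability
    likelihood -- function likelihood(d, h) in [0, 1]
    data       -- iterable of observations
    Returns the hypothesis held after the final observation.
    """
    weights = dict(prior)
    # Start from a sample drawn from the prior.
    h = rng.choices(list(weights), weights=list(weights.values()))[0]
    for d in data:
        # Sequential Bayesian update of the (normalized) posterior weights.
        for hyp in weights:
            weights[hyp] *= likelihood(d, hyp)
        total = sum(weights.values())
        for hyp in weights:
            weights[hyp] /= total
        # Win-stay: keep h with probability = likelihood of d under h.
        if rng.random() >= likelihood(d, h):
            # Lose-sample: draw a fresh hypothesis from the current posterior.
            h = rng.choices(list(weights), weights=list(weights.values()))[0]
    return h
```

With deterministic likelihoods (one hypothesis predicts every observation, the other predicts none), the sampler settles on the consistent hypothesis after the first observation, which matches the intuition that "losing" forces a resample from a posterior that has already ruled the bad hypothesis out.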


Cited by 8 publications (11 citation statements)
References 52 publications
“…As with Bramley et al, Bonawitz et al (2014) found that children and adults' online structure judgments exhibited sequential dependence. To account for this they proposed an account of how causal learners might rationally reduce the computational effort of continually reconsidering their model.…”
Section: Behavioral Patterns and Existing Explanations (supporting, confidence: 55%)
“…We first formalize causal model inference at the computational level. We then highlight the ways in which past experiments have shown human learning to diverge from the predictions of this idealized account, using these to motivate two causal judgment heuristics proposed in the literature: simple endorsement (Bramley, Lagnado, & Speekenbrink, 2015; Fernbach & Sloman, 2009) and win-stay, lose-sample (Bonawitz, Denison, Gopnik, & Griffiths, 2014) before developing our own Neurath's ship framework for belief change and active learning.…”
Section: Models Of Human Causal Learning Based On Bayesian Network H… (mentioning, confidence: 99%)
“…The state-dependent teaching model is also found to be consistent with simple human learning models in cognitive science, including the "win-stay lose-shift" model [8], [9] (e.g., when σ(h′; h) = 0 if h′ = h and 1 otherwise, the learner prefers to stay at the same hypothesis if it is consistent with the observed data).…”
Section: Teaching Finishes If the Learner's Updated Hypothesis… (supporting, confidence: 66%)
“…We begin by introducing the option learning and function learning models of human learning, followed by a discussion of potential sampling strategies. Lastly, we also consider several heuristics that do not build a representation of the world (i.e., no predictions about reward), but also capture aspects of human search behavior (Bonawitz, Denison, Gopnik, & Griffiths, 2014; Raichlen et al, 2014).…”
Section: Models Of Human Learning (mentioning, confidence: 99%)