2019
DOI: 10.31234/osf.io/uthw2
Preprint
Using reinforcement learning models in social neuroscience: frameworks, pitfalls, and suggestions of best practices

Abstract: Recent years have witnessed a dramatic increase in the use of reinforcement learning (RL) models in social, cognitive and affective neuroscience. This approach, in combination with neuroimaging techniques such as functional magnetic resonance imaging, enables quantitative investigations into latent mechanistic processes. However, increased use of relatively complex computational approaches has led to potential misconceptions and imprecise interpretations. Here, we present a comprehensive framework for the exam…

Cited by 15 publications (17 citation statements)
References 76 publications (101 reference statements)
“…For α close to 1, the subjective value is strongly updated by the last outcome. Note though that the learning rate α cannot directly be interpreted as a measure of how fast participants understand which of the two symbols is better (see also Zhang et al, 2020). Rather, it describes how strongly subjective values are influenced by the last outcome, regardless of the outcomes that came before.…”
Section: Models for Outcome Evaluation (mentioning)
Confidence: 99%
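The point made in the statement above can be sketched in code. Below is a minimal illustration of a standard delta-rule (Rescorla-Wagner-style) update, in which a high learning rate α means the subjective value is pulled strongly toward the most recent outcome rather than reflecting how "fast" a participant understands the task. The function names and outcome sequence are illustrative, not taken from the paper.

```python
def update_value(v, outcome, alpha):
    """One delta-rule step: V <- V + alpha * (outcome - V)."""
    return v + alpha * (outcome - v)

def run(outcomes, alpha, v0=0.0):
    """Apply the delta rule across a sequence of outcomes; return final V."""
    v = v0
    for o in outcomes:
        v = update_value(v, o, alpha)
    return v

outcomes = [1, 1, 1, 0]  # three rewards, then one non-reward

# With alpha close to 1, the last outcome dominates regardless of history:
print(run(outcomes, alpha=0.9))  # close to 0, despite three earlier rewards
# With a small alpha, earlier outcomes still carry substantial weight:
print(run(outcomes, alpha=0.2))  # well above 0
```

The final value is effectively an exponentially weighted average of past outcomes, with α setting the recency weighting — which is why α describes the influence of the last outcome rather than task comprehension.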
“…However, they may also wish to consult many other reviews and excellent guidelines on the topic ( Daw and Doya, 2006 ; Dayan and Niv, 2008 ; Samson et al. , 2010 ; Daw, 2011 ; Zhang et al. , 2019 ).…”
Section: Introduction (mentioning)
Confidence: 99%
“…A changeable environment requires fast learning guided by recent feedback, whereas a stable environment requires slower learning over time (e.g. [35,36]). Crucially, probabilistic feedback also requires learning to ignore 'misleading' punishment.…”
Section: Introduction (mentioning)
Confidence: 99%
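The trade-off described in the statement above — fast learning suits a changeable environment, while slower learning averages over misleading probabilistic feedback in a stable one — can be sketched with the same delta rule. The sequences and function below are illustrative assumptions, not the cited study's task.

```python
def track(outcomes, alpha, v0=0.5):
    """Delta-rule value tracking: return the trajectory of V across outcomes."""
    v = v0
    trace = []
    for o in outcomes:
        v = v + alpha * (o - v)
        trace.append(v)
    return trace

# Changeable environment: the reward contingency reverses halfway through.
reversing = [1] * 10 + [0] * 10
fast = track(reversing, alpha=0.8)[-1]  # quickly re-learns after the reversal
slow = track(reversing, alpha=0.1)[-1]  # still lags behind the change
print(fast < slow)  # high alpha ends closer to the new outcome value of 0

# Stable but probabilistic environment: mostly reward, occasional punishment.
stable = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]
steady = track(stable, alpha=0.1)
jumpy = track(stable, alpha=0.8)
# A low alpha smooths over the "misleading" punishments; a high alpha overreacts.
print(max(steady) - min(steady) < max(jumpy) - min(jumpy))
```

Under this sketch, the high-α learner adapts rapidly after the reversal but swings wildly on misleading feedback, while the low-α learner is stable but slow to update — the tension the cited passage describes.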