2020
DOI: 10.31234/osf.io/9cvw3
Preprint

The Role of Executive Function in Shaping Reinforcement Learning

Abstract: Reinforcement learning (RL) models have advanced our understanding of how animals learn and make decisions, and how the brain supports some aspects of learning. However, the neural computations that are explained by RL algorithms fall short of explaining many sophisticated aspects of human decision making, including the generalization of learned information, one-shot learning, and the synthesis of task information in complex environments. Instead, these aspects of instrumental behavior are assumed to be supported…

Cited by 14 publications (22 citation statements) · References 25 publications

Citation statements (ordered by relevance):
“…However, RL as a computational model of cognition typically assumes a given action space defined by the modeler, which provides the relevant dimensions of the choice space (i.e., either the yogurt color or the cup position); there is no ambiguity in what the choices are, and the nature of the choice space does not matter (Rmus, McDougle, & Collins, 2020). As such, RL experiments in psychology tend not to treat the type of choice as important, whether a single motor action such as pressing a key with the index finger (A. G. E. Collins, Ciullo, Frank, & Badre, 2017; Tai, Lee, Benavidez, Bonci, & Wilbrecht, 2012) or the selection of a goal stimulus (Foerde & Shohamy, 2011; Daw, Gershman, Seymour, Dayan, & Dolan, 2011; Frank, Moustafa, Haughey, Curran, & Hutchison, 2007), and researchers use the same models and generalize findings across choice types.…”
Section: Introduction · Citation type: mentioning
confidence: 99%
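To illustrate the point in the statement above (an assumed toy example, not taken from any of the cited studies): the modeler's definition of the action space fixes what the learner's values range over, while the update rule itself is unchanged whether "actions" are motor responses or goal stimuli. All names and parameter values below are illustrative.

def q_update(q, action, reward, alpha=0.1):
    # One delta-rule update on a dictionary of action values.
    q[action] += alpha * (reward - q[action])
    return q

# Same learner, two modeler-defined choice spaces (names are illustrative):
q_motor = {"left_key": 0.0, "right_key": 0.0}      # choices as motor actions
q_stim = {"red_yogurt": 0.0, "blue_yogurt": 0.0}   # choices as goal stimuli

q_update(q_motor, "left_key", reward=1.0)
q_update(q_stim, "red_yogurt", reward=1.0)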
“…Our study provides quantitative evidence that a pure reinforcement learning modeling approach does not capture the cognitive processes needed to solve feature-based learning. By formalizing the subcomponent processes needed to augment standard RL modeling, we provide strong empirical evidence for the recently proposed 'EF-RL' framework, which describes how executive functions (EF) augment RL mechanisms during cognitive tasks (Rmus et al., 2020). The framework asserts that RL mechanisms are central for learning a policy to address task challenges, but that attention-, action-, and higher-order expectations are integral for shaping these policies (Rmus et al., 2020).…”
Section: Results · Citation type: mentioning
confidence: 99%
“…By formalizing the subcomponent processes needed to augment standard RL modeling, we provide strong empirical evidence for the recently proposed 'EF-RL' framework, which describes how executive functions (EF) augment RL mechanisms during cognitive tasks (Rmus et al., 2020). The framework asserts that RL mechanisms are central for learning a policy to address task challenges, but that attention-, action-, and higher-order expectations are integral for shaping these policies (Rmus et al., 2020). In our study these 'EF' functions included working memory, adaptive exploration, and an 'attention' mechanism for decaying non-chosen values.…”
Section: Results · Citation type: mentioning
confidence: 99%
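The statement above names concrete EF-style components (working memory, adaptive exploration, decay of non-chosen values). The sketch below is a minimal illustration of one of them, decay of unchosen values toward their initial value, under assumed parameters; it is not the cited paper's actual model.

import numpy as np

def rl_with_decay(choices, rewards, n_options, alpha=0.3, decay=0.1, q0=0.5):
    # Standard delta-rule learning on the chosen option, plus one "EF-style"
    # component from the statement above: values of non-chosen options decay
    # back toward their initial value q0 (an attention/forgetting mechanism).
    q = np.full(n_options, q0)
    for c, r in zip(choices, rewards):
        q[c] += alpha * (r - q[c])                # RL update on the choice
        others = np.arange(n_options) != c
        q[others] += decay * (q0 - q[others])     # decay of unchosen values
    return q

# Example: three options, option 0 rewarded twice
print(rl_with_decay(choices=[0, 1, 0], rewards=[1.0, 0.0, 1.0], n_options=3))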
“…It is often referred to as model-free learning, in contrast to computationally taxing but often more efficient model-based learning, where the agent uses a model of the world to simulate possible paths and downstream outcomes of a decision (Daw et al., 2011), a process thought to depend on the prefrontal cortex (Otto, Gershman, et al., 2013; Smittenaar et al., 2013). Some of the more sophisticated aspects of human decision-making previously unexplained by model-free RL (generalizing across contexts, judging what information is currently relevant, breaking down big goals into subgoals) depend on interactions of canonical RL with cognitive control processes subserved by fronto-parietal and cingulo-opercular networks (Botvinick, 2012; J. F. Cavanagh & Frank, 2014; Collins et al., 2017; Otto, Gershman, et al., 2013; Rmus et al., 2020). On the other hand, cognitive maps that support learning in physical and abstract spaces depend on the hippocampus (Dombrovski et al., 2020; Mattar & Daw, 2018; Miller et al., 2017; Vikbladh et al., 2019).…”
Section: Reinforcement Learning (Sidebar 2) · Citation type: mentioning
confidence: 99%
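A minimal sketch of the model-free versus model-based contrast described in the statement above: the model-free learner only updates a cached value table, while the model-based planner simulates one step ahead with a world model. The three-state task, Dirichlet transition model, and all parameters are assumptions for illustration, not taken from the cited studies.

import numpy as np

# Hypothetical toy task: 3 states, 2 actions; T and R are assumed.
n_states, n_actions = 3, 2
Q = np.zeros((n_states, n_actions))  # model-free cache of action values
T = np.random.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = np.array([0.0, 0.0, 1.0])        # reward attached to each state

def model_free_update(s, a, r, s_next, alpha=0.1, gamma=0.9):
    # Cached temporal-difference update: never consults the world model T.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

def model_based_q(s, V, gamma=0.9):
    # Plans by simulating one step ahead with the world model T:
    # Q(s, a) = sum over s' of T[s, a, s'] * (R[s'] + gamma * V[s'])
    return np.array([T[s, a] @ (R + gamma * V) for a in range(n_actions)])

model_free_update(s=0, a=1, r=0.0, s_next=2)
print(model_based_q(s=0, V=Q.max(axis=1)))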