2020
DOI: 10.1101/2020.10.21.348938
Preprint

Executive function supports single-shot endowment of value to arbitrary transient goals

Abstract: Recent evidence suggests that executive processes shape reinforcement learning (RL) computations. Here, we extend this idea to the processing of choice outcomes, asking if executive function and RL interact during learning from novel goals. We designed a task where people learned from familiar rewards or abstract instructed goals. We hypothesized that learning from these goals would produce reliable responses in canonical reward circuits, and would do so by leveraging executive function. Behavioral results poi…

Cited by 2 publications (1 citation statement)
References 89 publications (139 reference statements)
“…For example, WM may maintain reward information itself (deviating from traditional theories in which reward information is stored only by RL processes): in the PFC-BG model developed by Zhao et al. (2018), dopaminergic signals update both the basal ganglia and the prefrontal cortex, where reward information is encoded and updated in WM. Similarly, recent imaging work shows that WM helps transform novel goal stimuli into a signal the brain interprets as reward for learning (McDougle et al., 2021). WM may also assist RL by representing more abstract task-relevant information, allowing generalization across tasks (Williams & Phillips, 2020), or by effectively reducing the set of states or actions RL operates over by filtering out irrelevant state spaces (Rmus et al., 2021).…”
Citation type: mentioning (confidence: 99%)
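The last mechanism in the citation statement — WM shrinking the state space that RL must learn over by filtering out irrelevant features — can be illustrated with a minimal toy sketch. This is a hypothetical illustration, not an implementation of any cited model: the agent, task, and all parameter values below are assumptions chosen only to make the filtering idea concrete.

```python
import random

def run_agent(wm_filter, episodes=2000, seed=0):
    """Toy tabular Q-learner on a two-feature state where only feature 0
    predicts reward and feature 1 is noise. When wm_filter is True, a
    hypothetical WM stage drops the irrelevant feature before RL sees
    the state, halving the state space the Q-table must cover."""
    rng = random.Random(seed)
    q = {}                      # Q-table keyed by (state, action)
    alpha, eps = 0.1, 0.1       # learning rate, exploration rate
    total_reward = 0.0
    for _ in range(episodes):
        full_state = (rng.randint(0, 1), rng.randint(0, 1))
        # WM filtering: keep only the task-relevant feature
        state = (full_state[0],) if wm_filter else full_state
        # epsilon-greedy choice over actions {0, 1}
        if rng.random() < eps:
            action = rng.randint(0, 1)
        else:
            action = max((0, 1), key=lambda a: q.get((state, a), 0.0))
        # reward depends only on feature 0
        reward = 1.0 if action == full_state[0] else 0.0
        key = (state, action)
        q[key] = q.get(key, 0.0) + alpha * (reward - q.get(key, 0.0))
        total_reward += reward
    return total_reward / episodes, len(q)

acc_filtered, nq_filtered = run_agent(wm_filter=True)
acc_full, nq_full = run_agent(wm_filter=False)
```

Because the filtered agent learns over 2 states instead of 4, it maintains a smaller Q-table while solving the same task — a cartoon of the state-abstraction benefit Rmus et al. (2021) attribute to WM.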