2018
DOI: 10.1371/journal.pcbi.1006043

Rational metareasoning and the plasticity of cognitive control

Abstract: The human brain has the impressive capacity to adapt how it processes information to high-level goals. While it is known that these cognitive control skills are malleable and can be improved through training, the underlying plasticity mechanisms are not well understood. Here, we develop and evaluate a model of how people learn when to exert cognitive control, which controlled process to use, and how much effort to exert. We derive this model from a general theory according to which the function of cognitive co…

Cited by 123 publications (150 citation statements)
References 73 publications (129 reference statements)
“…It is also well established that the experience (Gratton, Coles, & Donchin, 1992) or expectation (Aarts & Roelofs, 2011) of task difficulty enhances control. This influence of reward and difficulty information on control was emphasized in theoretical accounts (Brehm & Self, 1989) and formalized in computational reinforcement learning (RL) models (Lieder, Shenhav, Musslick, & Griffiths, 2018; Silvetti, Vassena, Abrahamse, & Verguts, 2018; Verguts, Vassena, & Silvetti, 2015). Specifically, recent RL models assume that cognitive agents calculate and optimize not just expected reward (i.e., value) but a (linear) combination of reward and difficulty cost (e.g., reward − difficulty cost) in order to decide on whether to invest control or not.…”
Section: Introduction (mentioning)
confidence: 99%
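
A minimal sketch of the reward-minus-cost decision rule these RL accounts describe, assuming a simple linear trade-off between expected reward and difficulty cost (all names and values below are illustrative placeholders, not code from the cited models):

```python
# Sketch of a reward-minus-difficulty decision rule for control allocation.
# Names and values are illustrative assumptions, not the cited models' code.

def net_value(expected_reward: float, difficulty_cost: float,
              cost_weight: float = 1.0) -> float:
    """Linear combination of expected reward and difficulty cost."""
    return expected_reward - cost_weight * difficulty_cost

def should_invest_control(expected_reward: float, difficulty_cost: float) -> bool:
    """Invest control only when the expected benefit outweighs the cost."""
    return net_value(expected_reward, difficulty_cost) > 0.0

# Example: a high-reward but moderately difficult trial
print(should_invest_control(expected_reward=10.0, difficulty_cost=4.0))  # True
```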
“…amount of control allocated, and a reconfiguration cost that penalizes diverging from the most recent control signal, that is, with control cost parameters set to … and …. Following Lieder et al. (2018), the opportunity cost parameter was set to … points per second, which corresponds to an hourly wage of about $8/hour. The prior distribution on each weight is …, where … and … are free parameters that are shared across all weights.…”
Section: The Weights (mentioning)
confidence: 99%
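
The cost terms this excerpt refers to can be sketched roughly as an intensity cost on the allocated control signal, a reconfiguration cost on the change from the previous signal, an opportunity cost that scales with elapsed time, and a Gaussian prior on the weights. The parameter names below (a, b, rate_points_per_second, mu_0, sigma_0) are placeholders, since the excerpt's actual values did not survive extraction:

```python
import numpy as np

# Illustrative sketch of the cost terms described above; parameter names and
# default values are placeholders, not those used by Lieder et al. (2018).

def control_cost(signal: float, prev_signal: float,
                 a: float = 1.0, b: float = 1.0) -> float:
    """Intensity cost on the allocated control plus a reconfiguration cost
    that penalizes diverging from the most recent control signal."""
    intensity_cost = a * signal ** 2
    reconfiguration_cost = b * (signal - prev_signal) ** 2
    return intensity_cost + reconfiguration_cost

def opportunity_cost(duration_seconds: float,
                     rate_points_per_second: float) -> float:
    """Cost of time spent on the task, in points per second."""
    return rate_points_per_second * duration_seconds

def sample_weight_prior(n_weights: int,
                        mu_0: float = 0.0, sigma_0: float = 1.0) -> np.ndarray:
    """Gaussian prior on each weight; mean and spread shared across weights."""
    return np.random.normal(mu_0, sigma_0, size=n_weights)
```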
“…We base our examination on a recently developed model that describes cognitive control allocation as the result of a cost-benefit analysis, with individuals weighing the expected payoffs for engaging control against the effort-related costs associated with doing so, to determine the overall Expected Value of Control (EVC) for a particular control allocation (Shenhav et al., 2013; Musslick et al., 2015). Building on previous models of strategy learning (Lieder et al., 2017), we recently described a set of learning algorithms that would allow someone to learn EVC through experience performing different tasks (Lieder et al., 2018). According to this Learned Value of Control (LVOC) model, people learn to predict the value of control based on features of the task environment, and they select their control allocation accordingly.…”
Section: Introduction (mentioning)
confidence: 99%
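
The feature-based scheme the LVOC excerpt describes can be sketched as a linear predictor of the value of control over task features, with the allocation chosen to maximize the predicted value and the weights updated from experience. This is an illustrative reconstruction under simple assumptions (delta-rule update, fixed candidate set), not the LVOC model's actual equations:

```python
import numpy as np

# Illustrative sketch of learning a feature-based value of control and choosing
# the allocation that maximizes it. Not the LVOC model's actual update rule.

class ValueOfControlLearner:
    def __init__(self, n_features: int, learning_rate: float = 0.1):
        self.weights = np.zeros(n_features)
        self.learning_rate = learning_rate

    def predict(self, features: np.ndarray) -> float:
        """Predicted value of a candidate control allocation."""
        return float(self.weights @ features)

    def choose(self, candidate_features: list[np.ndarray]) -> int:
        """Pick the candidate control allocation with the highest predicted value."""
        return int(np.argmax([self.predict(f) for f in candidate_features]))

    def update(self, features: np.ndarray, observed_value: float) -> None:
        """Move predictions toward the observed value (simple delta rule)."""
        error = observed_value - self.predict(features)
        self.weights += self.learning_rate * error * features
```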
“…This originally inspired ideas that their output should be combined (8). Recently, rather complex patterns of interaction have been investigated, including MB training of MF (9, 10), MF control over MB calculations (11-13), the incorporation of MF values into MB calculations (14) and, of particular relevance for the present study, the creation of sophisticated, model-dependent, representations of the task that enable MF methods to work more efficiently (15), and potentially less susceptible to distraction (16). We deem these various interactions model-sensitive (MS), saving model-based for the original notion of prospective planning.…”
Section: Introduction (mentioning)
confidence: 99%