2021
DOI: 10.7554/elife.68943

Value signals guide abstraction during learning

Abstract: The human brain excels at constructing and using abstractions, such as rules or concepts. Here, in two fMRI experiments, we demonstrate a mechanism of abstraction built upon the valuation of sensory features. Human volunteers learned novel association rules based on simple visual features. Reinforcement-learning algorithms revealed that, with learning, high-value abstract representations increasingly guided participant behaviour, resulting in better choices and higher subjective confidence. We also found that…


Cited by 15 publications (17 citation statements)
References 91 publications (148 reference statements)
“…The decay mechanism allows their value to decay to zero despite not being chosen (otherwise, the model updates only the values of chosen features). Note that this feature-based RL model, although simple, is well suited to the additive reward structure of the task, and provides a better fit than more complex RL models, such as a conjunction-based RL model [22] or an Expert RL model that combines a few RL “experts”, each learning different combinations of the dimensions [23].…”
Section: Results
confidence: 99%
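The update rule the statement above describes can be sketched in a few lines: chosen features are updated toward the obtained reward, while unchosen features decay toward zero. This is a minimal illustration, not the cited authors' fitted model — the feature names, learning rate, and decay rate below are all illustrative assumptions.

```python
def feature_rl_update(values, chosen, reward, alpha=0.3, decay=0.05):
    """One trial of a feature-based RL update with decay.

    values : dict mapping feature name -> current value estimate
    chosen : set of features present in the chosen option
    reward : scalar outcome of the trial
    alpha, decay : illustrative learning/decay rates (assumptions)
    """
    for f in values:
        if f in chosen:
            # Chosen features move toward the obtained reward.
            values[f] += alpha * (reward - values[f])
        else:
            # Unchosen features decay toward zero, so their
            # stale values fade even though they are never updated.
            values[f] *= (1.0 - decay)
    return values

# Hypothetical usage: two visual dimensions (colour, shape), one rewarded trial.
v = {"red": 0.5, "blue": 0.2, "circle": 0.4, "square": 0.0}
v = feature_rl_update(v, chosen={"red", "circle"}, reward=1.0)
```

Without the decay branch, the model would update only the chosen features, which is exactly the failure mode the quoted statement notes.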
“…As a small step in this direction, we plan to computationally simulate a simplified CRMN as a MoEXP architecture. To this end, we will model participant learning behaviors observed in Cortese et al (2021b). In that experiment, participants were able to learn an abstract representation for reinforcement learning from a small sample.…”
Section: Discussion
confidence: 99%
“…We recently investigated how humans learn to solve decision problems based on abstractions [23]. In the task, hidden rules defined what information was relevant or irrelevant.…”
Section: Box 1 How Artificial Intelligence (AI) Agents Learn From Mul...
confidence: 99%
“…But given that stimuli are composed of many features/dimensions, a more serious concern in value-based choice is the possibility of compressing the wrong dimensions. If the abstraction is built at the wrong level, such as overly simple features, then the process becomes slow and inefficient [23,32]. We will come back to this problem later and discuss putative mechanisms that can monitor the reliability of the current abstraction given the agent's overarching goal(s).…”
Section: Box 1 How Artificial Intelligence (AI) Agents Learn From Mul...
confidence: 99%