2017
DOI: 10.1038/s41598-017-17687-2

Exploring Feature Dimensions to Learn a New Policy in an Uninformed Reinforcement Learning Task

Abstract: When making a choice with limited information, we explore new features through trial-and-error to learn how they are related. However, few studies have investigated exploratory behaviour when information is limited. In this study, we address, at both the behavioural and neural level, how, when, and why humans explore new feature dimensions to learn a new policy for choosing a state-space. We designed a novel multi-dimensional reinforcement learning task to encourage participants to explore and learn new featur…
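The abstract only gestures at the task structure, so below is a minimal sketch of the general idea of feature-based reinforcement learning in a multi-dimensional choice task. Everything in it (the two feature dimensions, the hidden reward rule, and the learning and exploration parameters) is an assumption made for illustration, not the authors' actual task or model.

```python
import random

# Minimal sketch (assumptions, not the paper's task): stimuli are defined by
# two feature dimensions, and reward depends on one hidden relevant feature.
COLOURS = ["red", "blue", "green"]
SHAPES = ["circle", "square", "triangle"]
RELEVANT_FEATURE = ("colour", "red")   # hidden rule, chosen arbitrarily here
ALPHA, EPSILON = 0.1, 0.1              # learning rate, exploration probability

# Feature-based values: one weight per feature value rather than per object,
# so only 6 entries are learned instead of 9 colour-shape conjunctions.
values = {("colour", c): 0.0 for c in COLOURS}
values.update({("shape", s): 0.0 for s in SHAPES})

def stimulus_value(stim):
    """Sum the learned weights of a stimulus's features."""
    colour, shape = stim
    return values[("colour", colour)] + values[("shape", shape)]

def choose(options):
    """Epsilon-greedy choice between candidate stimuli."""
    if random.random() < EPSILON:
        return random.choice(options)
    return max(options, key=stimulus_value)

for trial in range(500):
    options = [(random.choice(COLOURS), random.choice(SHAPES)) for _ in range(2)]
    chosen = choose(options)
    reward = 1.0 if ("colour", chosen[0]) == RELEVANT_FEATURE else 0.0
    error = reward - stimulus_value(chosen)          # prediction error
    for feature in [("colour", chosen[0]), ("shape", chosen[1])]:
        values[feature] += ALPHA * error             # credit both chosen features
```

Run long enough, the weight on the rewarded feature value tends to dominate, which is the sense in which such a learner can be said to have discovered the relevant feature dimension.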

Cited by 11 publications (6 citation statements). References 44 publications (86 reference statements).
“…This stands in contrast to reinforcement learning models that require definition of an appropriate state space for each task. A simplicity bias is also consistent with findings that suggest that trial-and-error learning follows a pattern whereby simpler feature-based state spaces precede more complex object-based spaces [5,58], and explains why classification becomes harder as the number of relevant dimensions grows [59]. The findings outlined in this section illustrate both the importance of structured knowledge in learning and the utility of Bayesian cognitive models for explaining how this knowledge is acquired.…”
Section: Insights From Structure Learning (supporting)
confidence: 83%
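To make the cited contrast between feature-based and object-based state spaces concrete, the following back-of-the-envelope sketch (assumed task sizes, not numbers from the cited studies) simply counts how many entries each representation would have to learn:

```python
# Assumed task: d feature dimensions, each taking k possible values.
def feature_based_entries(d, k):
    return d * k       # one learnable weight per feature value

def object_based_entries(d, k):
    return k ** d      # one learnable value per full conjunction (object)

for d in (2, 3, 4):
    print(d, feature_based_entries(d, k=3), object_based_entries(d, k=3))
# With k = 3 the feature-based space grows linearly (6, 9, 12) while the
# object-based space grows exponentially (9, 27, 81), one way to see why
# starting from the simpler feature-based space is an attractive strategy.
```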
“…This is equivalent to a prior over the hypothesis space, favouring hypotheses with relatively few features. Interestingly, this is consistent with findings in neuroscience that people tend to make decisions based on individual features before reasoning about objects that involve more complex combinations of features (Choung et al., 2017; Farashahi et al., 2017). Similarly, people find it harder to perform classification tasks as the number of relevant dimensions increases (Shepard, Hovland, & Jenkins, 1961).…”
Section: Discussion (supporting)
confidence: 89%
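The "prior over the hypothesis space, favouring hypotheses with relatively few features" mentioned above can be written out explicitly. The sketch below uses an assumed penalty of 2^(-|h|) on the number of features in a hypothesis; the functional form and the feature names are illustrative, not taken from the cited work.

```python
from itertools import combinations

FEATURES = ["colour", "shape", "size"]

# Assumed simplicity prior: P(h) proportional to 2 ** (-|h|), so hypotheses
# built from fewer features receive more prior probability mass.
hypotheses = [h for r in range(1, len(FEATURES) + 1)
              for h in combinations(FEATURES, r)]
weights = {h: 2.0 ** (-len(h)) for h in hypotheses}
total = sum(weights.values())
prior = {h: w / total for h, w in weights.items()}

for h, p in sorted(prior.items(), key=lambda kv: -kv[1]):
    print(h, round(p, 3))
# Each single-feature hypothesis, e.g. ('colour',), gets twice the prior
# probability of any two-feature hypothesis under this scheme, matching the
# qualitative bias toward deciding on individual features first.
```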
“…This introduces a prior over the hypothesis space, favouring hypotheses with relatively few features. This is consistent with findings that people tend to make decisions based on individual features before reasoning about objects that involve more complex combinations of features (Farashahi et al., 2017; Choung et al., 2017). Similarly, people find it harder to perform classification tasks as the number of relevant dimensions increases (Shepard et al., 1961).…”
Section: Discussion (supporting)
confidence: 87%