2018
DOI: 10.1016/j.neuroimage.2018.01.084

Rule activation and ventromedial prefrontal engagement support accurate stopping in self-paced learning

Abstract: When weighing evidence for a decision, individuals are continually faced with the choice of whether to gather more information or act on what has already been learned. The present experiment employed a self-paced category learning task and fMRI to examine the neural mechanisms underlying stopping of information search and how they contribute to choice accuracy. Participants learned to classify triads of face, object, and scene cues into one of two categories using a rule based on one of the stimulus dimensions… Show more

Cited by 11 publications (13 citation statements)
References 60 publications (95 reference statements)
“…As anticipated, a linear mixed effects model collapsed across trial type revealed that pattern similarity to the visual category was strongest for perfectly predictive features (M = 0.065), followed by the non-predictive but present features (M = −0.050) and the non-present features (M = −0.145) (F(2, 42) = 54.8, p < 0.001). This finding, whereby activation patterns elicited for stimuli during learning are most similar to predictive features, is consistent with recent studies using MVPA to measure dimensional selective attention in categorization and reinforcement learning (Mack et al, 2013, Mack et al, 2016; Leong et al, 2017; O'Bryan et al, 2018). For common trials, pairwise comparisons revealed significant differences between pattern similarity to perfect and imperfect predictors (t(21) = 3.38, p = 0.003), perfect predictors and non-present features (t(21) = 5.71, p < 0.001), and between imperfect predictors and non-present features (t(21) = 4.27, p < 0.001).…”
Section: Results (supporting; confidence: 90%)
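The pairwise comparisons in the excerpt above are paired t-tests over per-subject pattern-similarity scores (df = 21, so n = 22 subjects). A minimal illustrative sketch of that style of analysis is below; the data are simulated with made-up noise around the reported group means, not the authors' data, and the variable names are hypothetical.

```python
# Illustrative sketch only (not the authors' code or data): paired t-tests on
# simulated per-subject pattern-similarity scores for three feature types,
# mirroring the comparisons reported in the excerpt (n = 22 subjects, df = 21).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 22  # df = 21 in the reported t-tests

# Simulated per-subject mean pattern similarity; group means taken from the
# excerpt, the subject-level spread (0.05) is an arbitrary assumption.
perfect = rng.normal(0.065, 0.05, n_subjects)     # perfectly predictive features
imperfect = rng.normal(-0.050, 0.05, n_subjects)  # present but non-predictive
absent = rng.normal(-0.145, 0.05, n_subjects)     # non-present features

# Paired (within-subject) t-tests between feature types, as in the excerpt
for label, a, b in [("perfect vs imperfect", perfect, imperfect),
                    ("perfect vs absent", perfect, absent),
                    ("imperfect vs absent", imperfect, absent)]:
    t, p = stats.ttest_rel(a, b)
    print(f"{label}: t({n_subjects - 1}) = {t:.2f}, p = {p:.3g}")
```

Paired rather than independent tests are appropriate here because each subject contributes a similarity score for every feature type, so the comparison is within-subject.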
See 3 more Smart Citations
“…As anticipated, a linear mixed effects model collapsed across trial type revealed that pattern similarity to the visual category was the strongest for perfectly predictive features (M = .065), followed by the non-predictive but present features (M = −0.050) and the non-present features (M = −0.145), ( F (2, 42)=54.8, p<0.001). This finding, whereby activation patterns elicited for stimuli during learning are most similar to predictive features, is consistent with recent studies using MVPA to measure dimensional selective attention in categorization and reinforcement learning ( Mack et al, 2013 , Mack et al, 2016 ; Leong et al, 2017 ; O'Bryan et al, 2018 ). For common trials, pairwise comparisons revealed significant differences between pattern similarity to perfect and imperfect predictors ( t (21)=3.38, p=0.003), perfect predictors and non-present features ( t (21)=5.71, p<0.001), and between imperfect predictors and non-present features ( t (21)=4.27, p<0.001).…”
Section: Resultssupporting
confidence: 90%
“…These results are consistent with findings from other model-based fMRI studies suggesting that the MTL is involved in similarity-based retrieval ( Davis et al, 2012a , Davis et al, 2012b ). Likewise, the engagement of vmPFC corroborates recent studies suggesting that this region tracks higher relative evidence for categorization decisions ( Davis et al, 2017 ; O'Bryan et al, 2018 ). The positive relationship between vmPFC and similarity processes may also be reflective of attention to strong predictors ( Sharpe and Killcross, 2015 ; Nasser et al, 2017 ) or the application of familiar category rules ( Boettiger and D'Esposito, 2005 ; Liu et al, 2015 ), both of which are consistent with similarity-based accounts of IBRE that attribute choice to a well-established association between perfect predictors and their outcomes that is driven by attention (e.g.…”
Section: Results (supporting; confidence: 86%)
“…Previous studies have relied on contrastive analyses in which neural representations of attended stimulus dimensions are compared to those of unattended dimensions. Although statistically powerful, this approach defines selective attention in terms of the experimental paradigm (but see O'Bryan et al, 2018), and therefore sidesteps effects associated with individual differences in conceptual knowledge (e.g., Craig & Lewandowsky, 2012; Little & McDaniel, 2015; McDaniel et al, 2014; Raijmakers et al, 2014). These effects can be substantial, particularly for ill-defined categorization problems (such as the 5/4 categorization task) that are common in everyday life (Hedge et al, 2017; Johansen & Palmeri, 2002).…”
Section: Introduction (mentioning; confidence: 99%)