2020
DOI: 10.1037/xge0000747
The influence of reward magnitude on stimulus memory and stimulus generalization in categorization decisions.

Abstract: Reward magnitude is a central concept in most theories of preferential decision making and learning. However, it is unknown whether variable rewards also influence cognitive processes when learning how to make accurate decisions (e.g., sorting healthy and unhealthy food differing in appeal). To test this, we conducted 3 studies. Participants learned to classify objects with 3 feature dimensions into two categories before solving a transfer task with novel objects. During learning, we rewarded all correct decisions…

Cited by 10 publications (11 citation statements); references 147 publications (377 reference statements).
“…This theoretically commits CAL to the idea that abstraction is mainly driven by the rule-learning network, and strong memorization is more akin to stimulus identification. In other words, in exemplar models (e.g., GCM; Nosofsky, 1986), if the memory strength parameter of an exemplar becomes stronger, an increase in its recall accuracy is predicted, while a decrease in accuracy for exemplars from other categories is also predicted (see also Hendrickson et al., 2019; Homa et al., 2019; Schlegelmilch & von Helversen, 2020), similar to a recall bias. In CAL, increasing the memory strength of a stored instance increases its recall accuracy and decreases its interfering influence on category inferences for dissimilar instances.…”
Section: Discussion
confidence: 99%
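The exemplar-model prediction this quote describes follows directly from how memory strength enters GCM-style evidence. Below is a minimal numeric sketch, assuming a city-block distance, an exponential similarity gradient, and illustrative stimuli and parameter values; the function name and all numbers are assumptions, not the cited papers' code.

```python
# Sketch of a GCM-like choice rule (Nosofsky, 1986) with per-exemplar
# memory strengths; all names and values here are illustrative assumptions.
import numpy as np

def gcm_choice_prob(probe, exemplars, labels, strengths, c=2.0, target=0):
    """P(category = target | probe): each stored exemplar j contributes
    strengths[j] * exp(-c * distance(probe, exemplar_j))."""
    d = np.abs(exemplars - probe).sum(axis=1)   # city-block distance
    act = strengths * np.exp(-c * d)            # strength-weighted similarity
    return act[labels == target].sum() / act.sum()

# Two exemplars per category on 3 binary feature dimensions.
ex = np.array([[0, 0, 0], [0, 0, 1], [1, 1, 1], [1, 1, 0]], dtype=float)
lab = np.array([0, 0, 1, 1])

weak = np.ones(4)
strong = np.array([1.0, 1.0, 5.0, 1.0])        # boost memory of exemplar 2

same = np.array([1.0, 1.0, 1.0])               # identical to exemplar 2
near0 = np.array([0.0, 0.0, 1.0])              # close to category 0
print(gcm_choice_prob(same, ex, lab, weak, target=1),
      gcm_choice_prob(same, ex, lab, strong, target=1))   # recall improves
print(gcm_choice_prob(near0, ex, lab, weak, target=0),
      gcm_choice_prob(near0, ex, lab, strong, target=0))  # other category suffers
```

Raising one exemplar's strength increases evidence for its own recall but also pulls nearby probes from the other category toward it, which is the "recall bias" the quote refers to.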
“…in its recall accuracy is predicted, while a decrease in accuracy for exemplars from other categories is also predicted (see also Hendrickson et al., 2019; Homa et al., 2019; Schlegelmilch & von Helversen, 2020), similar to a recall bias. In CAL, increasing the memory strength of a stored instance increases its recall accuracy and decreases its interfering influence on category inferences for dissimilar instances.…”
Section: Synthesizing Rules and Memory-based Inference
confidence: 92%
“…However, the formal concepts of exemplar theory define memory strength as the degree by which each exemplar is integrated into overall similarity. In this sense, if the memory strength of an exemplar becomes stronger, an increase in its recall accuracy is predicted, while a decrease in accuracy of recalling exemplars from other categories is predicted, as well (see also Hendrickson et al., 2019; Homa et al., 2019; Schlegelmilch & von Helversen, 2020), similar to a recall bias. In the current version of CAL the increase in memory strength means an increase in the association between a stored instance and a category label, which does not affect how likely the retrieval of that instance or other instances is.…”
Section: Synthesizing Rules and Memory-based Inference
confidence: 99%
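The distinction CAL draws here can be illustrated with a hedged toy sketch (not CAL's actual network; the gating rule, names, and values below are all assumptions): memory strength scales only a stored instance's association with its category label, while retrieval itself is gated by similarity, so strengthening one instance does not change inferences about dissimilar probes.

```python
# Toy CAL-like scheme (illustrative only): strength enters the
# instance -> label association, not the retrieval step.
import numpy as np

def cal_like_evidence(probe, exemplars, labels, strengths, c=2.0, gate=0.9):
    """Signed evidence for category 1; an instance contributes only if
    its similarity to the probe clears the retrieval gate."""
    sim = np.exp(-c * np.abs(exemplars - probe).sum(axis=1))
    active = sim > gate                        # retrieval, unaffected by strength
    vote = np.where(labels == 1, 1.0, -1.0)    # label-association sign
    return (strengths * sim * vote)[active].sum()

ex = np.array([[0, 0, 0], [1, 1, 1]], dtype=float)
lab = np.array([0, 1])
weak, strong = np.ones(2), np.array([1.0, 5.0])  # boost instance 1

same = np.array([1.0, 1.0, 1.0])   # matches the boosted instance
far = np.array([0.0, 0.0, 0.0])    # dissimilar probe
print(cal_like_evidence(same, ex, lab, weak), cal_like_evidence(same, ex, lab, strong))
print(cal_like_evidence(far, ex, lab, weak), cal_like_evidence(far, ex, lab, strong))
# Output: 1.0 5.0 / -1.0 -1.0 -- the boost helps the instance's own recall
# but leaves the dissimilar probe's inference unchanged, unlike the GCM sketch.
```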
“…In this article, we introduce a hierarchical Bayesian version of the RulEx-J model since the hierarchical Bayesian modeling framework offers many advantages and has therefore become a very popular tool for estimating latent parameters of cognitive models (e.g., Bott et al, 2020; Mattes et al, 2020; Schlegelmilch & von Helversen, 2020; Schubert et al, 2019; for general introductions, see Lee, 2018; McElreath, 2020; Rouder et al, 2018). For instance, the hierarchical structure of the model naturally reflects the hierarchical data structure of many experiments, where several participants perform multiple trials of the same task and it is the aim of the researcher to draw conclusions on the group level (e.g., Steingroever et al, 2018).…”
Section: RulEx-J
confidence: 99%
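The hierarchical structure this last quote describes is straightforward to write down. Below is a minimal sketch using PyMC (an assumed tool choice, not necessarily the authors'): a per-participant weight w blends rule-based and exemplar-based predictions, as in RulEx-J, and is drawn from group-level parameters. The module predictions, priors, and all names are placeholders, not the paper's actual likelihood.

```python
# Hierarchical-Bayesian sketch of a RulEx-J-style mixture weight in PyMC.
# Everything below (data, priors, names) is illustrative, not the authors' model.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n_subj, n_trials = 20, 50
rule_pred = rng.uniform(0, 1, (n_subj, n_trials))      # placeholder rule module
exemplar_pred = rng.uniform(0, 1, (n_subj, n_trials))  # placeholder exemplar module
true_w = rng.beta(4, 2, n_subj)[:, None]
judgments = (true_w * rule_pred + (1 - true_w) * exemplar_pred
             + rng.normal(0, 0.05, (n_subj, n_trials)))

with pm.Model():
    mu_w = pm.Normal("mu_w", 0.0, 1.0)          # group-level mean (logit scale)
    sd_w = pm.HalfNormal("sd_w", 1.0)           # group-level spread
    w_raw = pm.Normal("w_raw", mu_w, sd_w, shape=n_subj)
    w = pm.Deterministic("w", pm.math.invlogit(w_raw))  # per-participant weight
    pred = w[:, None] * rule_pred + (1 - w[:, None]) * exemplar_pred
    sigma = pm.HalfNormal("sigma", 0.2)
    pm.Normal("obs", mu=pred, sigma=sigma, observed=judgments)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```

Drawing each participant's weight from group-level mu_w and sd_w is what lets the model shrink noisy individual estimates toward the group and support group-level conclusions, the advantage the quoted passage highlights.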