The findings of Shepard, Hovland, and Jenkins (1961) on the relative ease of learning the six elemental types of two-way classifications have been deeply influential twice over: first, as a rebuke to pure stimulus-generalization accounts, and second, as the leading benchmark for evaluating formal models of human category learning. The litmus test for models is the ability to simulate an observed advantage in learning a category structure based on an exclusive-or (XOR) rule over two relevant dimensions (Type II) relative to category structures that have no perfectly predictive cue or cue combination (including the linearly separable Type IV). However, a review of the literature reveals that a Type II advantage over Type IV is found only under highly specific experimental conditions. We investigate when and why a Type II advantage exists in order to determine the appropriate benchmark for models and the psychological theories they represent. A series of eight experiments links particular conditions of learning to outcomes ranging from a traditional Type II advantage to compelling non-differences and reversals (i.e., a Type IV advantage). Our findings call into question common interpretations of the Type II advantage as either a broad-based phenomenon of human learning or as strong evidence for an attention-mediated, similarity-based account. Finally, a role for verbalization in the category learning process is supported.
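The two structures at issue can be made concrete. In the Shepard, Hovland, and Jenkins scheme, stimuli vary on three binary dimensions; Type II assigns categories by an XOR rule over two dimensions (the third is irrelevant), whereas Type IV is linearly separable but has no perfectly predictive cue. A minimal sketch, assuming an arbitrary 0/1 coding and an arbitrary choice of which dimensions are relevant:

```python
from itertools import product

# The 8 stimuli formed by 3 binary dimensions (coding is an illustrative choice).
STIMULI = list(product([0, 1], repeat=3))

def type_ii(stim):
    """Type II: XOR rule over dimensions 1 and 2; dimension 3 is irrelevant."""
    d1, d2, _ = stim
    return "A" if d1 == d2 else "B"

def type_iv(stim):
    """Type IV: linearly separable 'majority' structure -- Category A if at
    most one feature mismatches the (arbitrary) prototype (0, 0, 0)."""
    return "A" if sum(stim) <= 1 else "B"

# Both structures split the 8 stimuli 4/4.
assert sum(type_ii(s) == "A" for s in STIMULI) == 4
assert sum(type_iv(s) == "A" for s in STIMULI) == 4

# Type II: neither relevant dimension alone predicts category membership
# (both of its values occur in both categories) ...
for d in (0, 1):
    assert len({(s[d], type_ii(s)) for s in STIMULI}) == 4
# ... but the two relevant dimensions jointly predict it perfectly (XOR).

# Type IV: no single cue is perfectly predictive either,
# yet the structure is linearly separable (a weighted sum separates it).
for d in (0, 1, 2):
    assert len({(s[d], type_iv(s)) for s in STIMULI}) == 4
```

The contrast the benchmark turns on is visible here: Type II is captured by a compact two-dimensional rule, while Type IV requires integrating evidence from all three imperfect cues.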
In a recent article, J. P. Minda and J. D. Smith (2002) argued that an exemplar model provided worse quantitative fits than an alternative prototype model to individual-subject data from the classic D. L. Medin and M. M. Schaffer (1978) 5/4 categorization paradigm. In addition, they argued that the exemplar model achieved its fits by making untenable assumptions regarding how observers distribute their attention. In this article, we demonstrate that when the models are equated in terms of their response-rule flexibility, the exemplar model provides a substantially better account of the categorization data than does a prototype or mixed model. In addition, we point to shortcomings in the attention-allocation analyses conducted by J. P. Minda and J. D. Smith (2002). When these shortcomings are corrected, we find no evidence that challenges the attention-allocation assumptions of the exemplar model.

A classic issue in the categorization literature has been whether people represent categories in terms of abstracted prototypes or in terms of specific exemplars. According to prototype models, people represent categories in terms of some central tendency computed over the category training instances and classify objects on the basis of how similar they are to the prototypes of the alternative categories (Homa & Vosburgh, 1976; Posner & Keele, 1968; Reed, 1972). By contrast, according to exemplar models, people represent categories by storing the individual training instances themselves (Hintzman, 1986; Medin & Schaffer, 1978; Nosofsky, 1986).

A well-known experimental paradigm that has been used for contrasting the predictions of exemplar and prototype models is the Medin and Schaffer (1978) 5/4 category structure, which is listed in Table 1. In this paradigm, the stimuli are simple perceptual forms that vary along four salient binary-valued dimensions. The stimuli are divided into two categories.
The logical values of the prototype of Category A are assumed to be 0 0 0 0, and the logical values of the prototype of Category B are assumed to be 1 1 1 1. Subjects are trained on the first nine items and are then given a transfer test that includes all the items in the list. This category structure is diagnostic because prototype and exemplar models tend to make opposite predictions for specific items. Most critically, prototype models predict that people will perform better on Stimulus A1 than on Stimulus A2 because A1 shares more features with its category prototype. In contrast, exemplar models generally predict an A2 advantage because A2 is highly similar to (i.e., shares three features with) two Category A exemplars and no Category B exemplars. In fact, the A2 advantage has been observed in numerous studies. Furthermore, when exemplar and prototype models are fitted to the classification data in this design, the results generally favor the predictions from the exemplar model (for reviews, see Nosofsky, 1992, 2000; but see Smith & Minda, 2000, for an opposing viewpoint).
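The contrasting A1-versus-A2 predictions can be reproduced with a small computation. The sketch below implements the multiplicative-similarity rule of the Medin and Schaffer (1978) context model alongside a prototype analogue, using the standard 5/4 training items relabeled so the Category A prototype is 0 0 0 0; the mismatch parameter s = 0.3 is a single illustrative value, not a fitted one, and attention weights are omitted for simplicity:

```python
# The nine 5/4 training items, coded so the Category A prototype is (0,0,0,0)
# and the Category B prototype is (1,1,1,1).
A_ITEMS = [(0,0,0,1), (0,1,0,1), (0,1,0,0), (0,0,1,0), (1,0,0,0)]  # A1..A5
B_ITEMS = [(0,0,1,1), (1,0,0,1), (1,1,1,0), (1,1,1,1)]             # B1..B4
S = 0.3  # illustrative mismatch parameter (0 < s < 1), not a fitted value

def sim(x, y):
    """Multiplicative similarity: multiply in s for every mismatching dimension."""
    out = 1.0
    for xi, yi in zip(x, y):
        if xi != yi:
            out *= S
    return out

def p_a_exemplar(item):
    """Exemplar model: summed similarity to stored training items of each category."""
    ev_a = sum(sim(item, ex) for ex in A_ITEMS)
    ev_b = sum(sim(item, ex) for ex in B_ITEMS)
    return ev_a / (ev_a + ev_b)

def p_a_prototype(item):
    """Prototype model: similarity to the two category prototypes only."""
    sa = sim(item, (0, 0, 0, 0))
    sb = sim(item, (1, 1, 1, 1))
    return sa / (sa + sb)

A1, A2 = A_ITEMS[0], A_ITEMS[1]
assert p_a_exemplar(A2) > p_a_exemplar(A1)    # exemplar model: A2 advantage
assert p_a_prototype(A1) > p_a_prototype(A2)  # prototype model: A1 advantage
```

The opposite orderings fall directly out of the structure: A1 sits one feature from its prototype but three features from two Category B exemplars, whereas A2 sits two features from its prototype but three features from two Category A exemplars and no Category B exemplar.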
Speeded perceptual classification experiments were conducted to distinguish among the predictions of exemplar-retrieval, decision-boundary, and prototype models. The key manipulation was that across conditions, individual stimuli received either probabilistic or deterministic category feedback. Regardless of the probabilistic feedback, however, an ideal observer would always classify the stimuli by using an identical linear decision boundary. Subjects classified the probabilistic stimuli with lower accuracy and longer response times than they classified the deterministic stimuli. These results are in accord with the predictions of the exemplar model and challenge the predictions of the prototype and decision-boundary models.

A fundamental issue in the field of perceptual classification concerns the manner in which people represent categories in memory and the decision processes that they use for making classification judgments. Among the major formal models of perceptual classification are exemplar-retrieval, prototype, and decision-boundary models. According to exemplar-retrieval models (Hintzman, 1986; Medin & Schaffer, 1978; Nosofsky, 1986), people represent categories by storing individual exemplars of categories in memory, and they make classification decisions on the basis of the similarity of test items to these stored exemplars. According to prototype models (Posner & Keele, 1968; Reed, 1972; Smith, Murray, & Minda, 1997), a category representation consists of an idealized prototype, usually assumed to be the central tendency of the category training exemplars. And according to decision-boundary models (Ashby & Townsend, 1986), people use decision boundaries for dividing a multidimensional psychological space into category-response regions. These boundaries can correspond either to simple, verbalizable rules or to complex, nonverbalizable ones.
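The ideal-observer claim above can be illustrated with a toy computation. An ideal observer responds with whichever category is more probable given the stimulus, so scaling feedback probabilities from deterministic (1.0/0.0) to probabilistic (e.g., 0.75/0.25) leaves every optimal response, and hence the decision boundary, unchanged. The stimulus positions and probability values below are illustrative assumptions, not the experiments' actual design:

```python
# Each stimulus has some probability of receiving "A" feedback; the ideal
# observer responds "A" whenever that probability exceeds 0.5.

def ideal_response(p_a):
    """Optimal response for a stimulus whose feedback is 'A' with probability p_a."""
    return "A" if p_a > 0.5 else "B"

# Six hypothetical stimulus positions along the relevant dimension, with
# deterministic vs. probabilistic feedback schedules (illustrative numbers).
deterministic = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
probabilistic = [0.75, 0.75, 0.75, 0.25, 0.25, 0.25]

det_responses = [ideal_response(p) for p in deterministic]
prob_responses = [ideal_response(p) for p in probabilistic]

# Identical optimal responses => identical linear decision boundary,
# even though accuracy is capped at 75% in the probabilistic condition.
assert det_responses == prob_responses == ["A", "A", "A", "B", "B", "B"]
```

The empirical finding reported above is exactly what this normative analysis does not predict: human accuracy and response times degraded for the probabilistic stimuli even though the optimal boundary was unchanged.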
Hybrid or multiple-system models have also been proposed that involve combinations of these types of representations and decision processes (Anderson & Betz, 2001; Ashby, Alfonso-Reese, Turken, & Waldron, 1998; Erickson & Kruschke, 1998; Nosofsky, Palmeri, & McKinley, 1994; Vandierendonck, 1995). However, the research reported in this article sought to develop contrasts among the predictions of the single-system models.

One of the emerging themes in the perceptual classification literature has been to test formal models not only on their ability to predict classification choice probabilities but on their ability to account for the actual time course of classification decision making (Anderson & Betz, 2001; Ashby, Boynton, & Lee, 1994; Ashby & Maddox, 1994; Cohen & Nosofsky, 2003; Lamberts, 1995, 1998, 2000; Maddox & Ashby, 1996; Nosofsky & Palmeri, 1997a, 1997b; Ratcliff & Rouder, 1998; Verguts, Storms, & Tuerlinckx, 2003). Thus, versions of the models have been developed that predict classification response times (RTs). We pursue this theme in the present article. Specifically, the purpose of this research was to conduct experiments to distinguish among the predictions of these single-system models.
Researchers have argued that an implicit procedural-learning system underlies performance for information-integration category structures, whereas a separate explicit system underlies performance for rule-based categories. One source of evidence is a dissociation in which procedural interference harms performance in information-integration structures, but not in rule-based ones. The present research provides evidence that some form of overall difficulty or category complexity lies at the root of the dissociation. The authors report studies in which procedural interference is observed for even simple rule-based structures under more sensitive testing conditions. Furthermore, the magnitude of the interference is large when the nature of the rule is made more complex. By contrast, the magnitude of interference is greatly reduced for an information-integration structure that is cognitively simple. These results challenge the view that a procedural-learning system mediates performance on information-integration categories, but not on rule-based ones.
Within the class of single-system models, a variety of representational formats exist; however, most multiple-system models propose that category learning is mediated by an explicit system and at least one implicit system. The COVIS (competition between a verbal and an implicit system) model (Ashby et al., 1998) is a prominent theory in category learning and is representative of the class of models that posit multiple systems. According to COVIS, there are two systems responsible for category learning: an explicit system that tests hypotheses on the basis of verbalizable rules, and an implicit system that is mediated by procedural learning. The explicit system relies on access to working memory and executive attention, using them to store and assess candidate rules. On each trial, a response is executed on the basis of the current rule. The observer continues to use the rule until corrective feedback suggests that the rule may be incorrect, and then the observer decides either to maintain the rule or to search for a new rule. Given the latter option, a new rule must be selected and attention switched to this rule, and these processes require both time and effort. By contrast, the implicit system is based on a procedural learning system, dependent on dopamine reward signals, that is updated automatically.

Recently, Maddox, Ashby, Ing, and Pickering (2004) sought to support the assumptions of the explicit and implicit learning components of COVIS. These researchers distinguish between two fundamental category types: rule-based categories and information-integration categories. Rule-based categorization problems are those in which it is easy for an observer to verbalize the optimal strategy. The observer selectively attends to each dimension, decides on the category regions along each of the component dimensions, and formulates a rule to determine category membership. An example is Maddox et al.'s unidimensional category structure, depicted in Figure 1A.
The stimuli are sine wave gratings that vary in spatial frequency (perceived as bar width) and angle, and the vertical line represents the optimal decision bound for separating the categories. In this example, only spatial frequency determines category membership, and the angle of the gratings is irrelevant. Any percept left of the decision bound belongs to Category A, and any percept right of the decision bound belongs to Category B. In general, for rule-based categories, decisions about the percept's value along each dimension are made first, and then these separate decisions are combined to generate a response. In the present case, only a single decision is necessary, and the simple verbal rule corresponds to "Respond A if the bars are wide and respond B if the bars are thin."

Researchers have argued that different categorization problems are learned by separate and distinct cognitive systems. They propose that an explicit system is responsible for learning rule-based categories and that a separate implicit system learns information-integration categories.
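The unidimensional rule described above amounts to a single threshold on one dimension. A minimal sketch, where the bound location and the frequency/orientation values are made-up illustrative numbers (wide bars correspond to low spatial frequency):

```python
# Sketch of the unidimensional rule-based strategy: only spatial frequency
# matters; orientation is ignored entirely. The criterion value is a
# hypothetical illustration, not the bound used by Maddox et al.
DECISION_BOUND = 2.0  # criterion on spatial frequency (cycles per degree)

def classify(frequency, orientation):
    """Respond 'A' (wide bars = low frequency) left of the bound, else 'B'."""
    return "A" if frequency < DECISION_BOUND else "B"

# Orientation has no effect on the response:
assert classify(1.2, 30) == classify(1.2, 80) == "A"
assert classify(3.5, 30) == classify(3.5, 80) == "B"
```

An information-integration structure, by contrast, would require combining frequency and orientation predecisionally (e.g., a diagonal bound in this two-dimensional space), which is the kind of strategy that resists easy verbalization.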