We used adaptive network theory to extend the Rescorla-Wagner (1972) least mean squares (LMS) model of associative learning to phenomena of human learning and judgment. In three experiments subjects learned to categorize hypothetical patients with particular symptom patterns as having certain diseases. When one disease is far more likely than another, the model predicts that subjects will substantially overestimate the diagnosticity of the more valid symptom for the rare disease. The results of Experiments 1 and 2 provide clear support for this prediction in contradistinction to predictions from probability matching, exemplar retrieval, or simple prototype learning models. Experiment 3 contrasted the adaptive network model with one predicting pattern-probability matching when patients always had four symptoms (chosen from four opponent pairs) rather than the presence or absence of each of four symptoms, as in Experiment 1. The results again support the Rescorla-Wagner LMS learning rule as embedded within an adaptive network model.
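The Rescorla-Wagner rule named above is formally the LMS (delta) rule: on each trial, the associative weight of every present cue moves in proportion to the prediction error. A minimal sketch, assuming illustrative parameter values and a hypothetical setup in which one of four binary symptoms is perfectly diagnostic of the disease (the function and variable names are not taken from the experiments):

```python
import numpy as np

# Rescorla-Wagner / LMS (delta) rule: on each trial, adjust the weights of
# present cues in proportion to the prediction error (target - prediction).

def lms_update(w, x, target, lr=0.1):
    """One LMS step: w += lr * (target - w.x) * x."""
    prediction = w @ x
    error = target - prediction
    return w + lr * error * x

# Four binary symptoms; target = 1.0 when the disease is present.
# Here symptom 0 is (artificially) perfectly diagnostic.
rng = np.random.default_rng(0)
w = np.zeros(4)
for _ in range(1000):
    x = rng.integers(0, 2, size=4).astype(float)
    target = float(x[0])
    w = lms_update(w, x, target)
# After training, w[0] approaches 1 and the other weights approach 0.
```

Because updates are error-driven rather than frequency-driven, cues compete for associative strength; this competition is what generates the model's predicted overestimation of a valid symptom's diagnosticity for a rare disease.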
Learning and memory in humans rely upon several memory systems, which appear to have dissociable brain substrates. A fundamental question concerns whether, and how, these memory systems interact. Here we show using functional magnetic resonance imaging (FMRI) that these memory systems may compete with each other during classification learning in humans. The medial temporal lobe and basal ganglia were differently engaged across subjects during classification learning depending upon whether the task emphasized declarative or nondeclarative memory, even when the to-be-learned material and the level of performance did not differ. Consistent with competition between memory systems suggested by animal studies and neuroimaging, activity in these regions was negatively correlated across individuals. Further examination of classification learning using event-related FMRI showed rapid modulation of activity in these regions at the beginning of learning, suggesting that subjects relied upon the medial temporal lobe early in learning. However, this dependence rapidly declined with training, as predicted by previous computational models of associative learning.
The authors propose a computational theory of the hippocampal region's function in mediating stimulus representations. The theory assumes that the hippocampal region develops new stimulus representations that enhance the discriminability of differentially predictive cues while compressing the representation of redundant cues. Other brain regions, including cerebral and cerebellar cortices, are presumed to use these hippocampal representations to recode their own stimulus representations. In the absence of an intact hippocampal region, the theory implies that other brain regions will attempt to learn associations using previously established fixed representations. Instantiated as a connectionist network model, the theory provides a simple and unified interpretation of the functional role of the hippocampal region in a wide range of conditioning paradigms, including stimulus discrimination, reversal learning, stimulus generalization, latent inhibition, sensory preconditioning, and contextual sensitivity. The theory makes novel predictions regarding the effects of hippocampal lesions on easy-hard transfer and compound preexposure. Several prior qualitative characterizations of hippocampal function--including stimulus selection, chunking, cue configuration, and contextual coding--are identified as task-specific special cases derivable from this more general theory. The theory suggests that a profitable direction for future empirical and theoretical research will be the study of learning tasks in which both intact and lesioned animals exhibit similar initial learning behaviors but differ on subsequent transfer and generalization tasks.
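One way to picture the proposed hippocampal computation is as a small network that learns both to reconstruct its stimulus input and to predict the outcome, so that its hidden layer differentiates predictive cues while compressing redundant ones. The sketch below is an illustrative reading of that idea, not the authors' exact model; the architecture, learning rate, and cue structure are all assumptions:

```python
import numpy as np

# Illustrative "predictive autoencoder" sketch: hidden-layer codes are shaped
# jointly by reconstructing the input and predicting the outcome (US).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n_in, n_hid = 4, 3
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_in + 1, n_hid))  # outputs: reconstruction + US
lr = 0.5

def make_trial():
    # Cue 0 predicts the US; cues 2 and 3 are redundant (always equal).
    x = rng.integers(0, 2, size=4).astype(float)
    x[3] = x[2]
    target = np.append(x, x[0])  # reconstruct input, predict US
    return x, target

losses = []
for _ in range(3000):
    x, t = make_trial()
    h = sigmoid(W1 @ x)          # hippocampal-style recoding
    y = sigmoid(W2 @ h)
    err = y - t
    losses.append(float(err @ err))
    # Backpropagation for squared error with sigmoid units.
    dy = err * y * (1 - y)
    dh = (W2.T @ dy) * h * (1 - h)
    W2 -= lr * np.outer(dy, h)
    W1 -= lr * np.outer(dh, x)
```

On this reading, a "lesioned" system corresponds to freezing W1 and training only the output layer on the fixed representations, which is the contrast the theory exploits in its transfer predictions.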
We partially replicate and extend Shepard, Hovland, and Jenkins's (1961) classic study of task difficulty for learning six fundamental types of rule-based categorization problems. Our main results mirrored those of Shepard et al., with the ordering of task difficulty being the same as in the original study. A much richer data set was collected, however, which enabled the generation of block-by-block learning curves suitable for quantitative fitting. Four current computational models of classification learning were fitted to the learning data: ALCOVE (Kruschke, 1992), the rational model (Anderson, 1991), the configural-cue model (Gluck & Bower, 1988b), and an extended version of the configural-cue model with dimensionalized, adaptive learning rate mechanisms. Although all of the models captured important qualitative aspects of the learning data, ALCOVE provided the best overall quantitative fit. The results suggest the need to incorporate some form of selective attention to dimensions in category-learning models based on stimulus generalization and cue conditioning.

Recent years have seen an avalanche of newly proposed models of category learning and representation. As such models grow increasingly more sophisticated, there is a need to develop increasingly more rigorous testing grounds so that one may choose among them. Most previous attempts to test alternative models have focused on the end products of categorization by observing patterns of transfer data following an initial learning phase. In the spirit of developing more rigorous tests, there has been a renewed interest in understanding details of the category learning process (see, e.g., Estes, 1986; Estes, Campbell, Hatsopoulos, & Hurwitz, 1989; Nosofsky, Kruschke, & McKinley, 1992).
Beyond simply predicting transfer data following the completion of category learning, the following question arises: How well can alternative models predict patterns of classification during the entire learning sequence?

The purpose of our study was to collect a rich set of classification learning data that would provide a useful testing ground for the numerous models that have been proposed. A seemingly infinite variety of learning paradigms are available, but we hoped to collect some learning data that researchers might regard as fundamental. Although the ultimate goal of categorization researchers is the development of a model that can account for all forms of classification phenomena, it seems worthwhile to focus initial efforts on primary and basic forms of classification learning data.

A classic study of category learning is the one reported by Shepard, Hovland, and Jenkins (1961), who studied the difficulty of learning six fundamental types of categorization problems. Their results proved to be highly diagnostic for ruling out various models of classification learning based solely on elementary principles of stimulus generalization and cue conditioning. As will be seen, their data continue to challenge current models of classification learning.
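The six Shepard, Hovland, and Jenkins structures partition eight stimuli (three binary dimensions) into two categories of four. Three of the types have standard logical definitions and are sketched below; Types III-V involve particular rule-plus-exception assignments and are omitted here rather than guessed at:

```python
from itertools import product

# Stimuli vary on three binary dimensions: 8 stimuli in all.
stimuli = list(product([0, 1], repeat=3))

# Type I:  a single relevant dimension.
type_i = {s: s[0] for s in stimuli}
# Type II: exclusive-or of two dimensions; the third is irrelevant.
type_ii = {s: s[0] ^ s[1] for s in stimuli}
# Type VI: parity of all three dimensions; no simpler rule exists.
type_vi = {s: s[0] ^ s[1] ^ s[2] for s in stimuli}

# Each structure splits the eight stimuli into two categories of four.
for labels in (type_i, type_ii, type_vi):
    assert sum(labels.values()) == 4
```

The observed difficulty ordering (I easiest, VI hardest, with II easier than III-V for these stimuli) is what models based purely on cue conditioning fail to capture without a selective-attention mechanism.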