Improvements in performance on visual tasks due to practice are often specific to a retinal position or stimulus feature. Many researchers suggest that such specific perceptual learning alters selective retinotopic representations in early visual analysis. However, transfer is almost always practically advantageous, and it does occur. If perceptual learning alters location-specific representations, how does it transfer to new locations? An integrated reweighting theory explains transfer over retinal locations by incorporating higher-level, location-independent representations into a multilevel learning system. Location transfer is mediated through location-independent representations, whereas stimulus feature transfer is determined by stimulus similarity at both location-specific and location-independent levels. Transfer to new positions thus differs fundamentally from transfer to new stimuli. After substantial initial training on an orientation discrimination task, switches to a new position were compared with switches to new orientations in the same position, or switches of both. Position switches led to the highest degree of transfer, whereas orientation switches led to the highest levels of specificity. A computational model of integrated reweighting is developed and tested that incorporates the details of the stimuli and the experiment. Transfer to an identical orientation task in a new position is mediated by more broadly tuned, location-invariant representations, whereas changing the orientation in the same position invokes interference or independent learning of the new orientations at both levels, reflecting stimulus dissimilarity. Consistent with single-cell recording studies, perceptual learning alters the weighting of both early and midlevel representations of the visual system.
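The integrated reweighting idea can be illustrated with a small numerical sketch. The channel counts, learning rate, and the simple error-driven update below are all illustrative assumptions (a stand-in for the paper's augmented Hebbian rule): decisions pool location-specific and location-invariant channel activations through learned weights, and because the invariant weights are shared across retinal locations, training at one location partially transfers to another.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SPECIFIC = 8    # location-specific channels (one bank per retinal location)
N_INVARIANT = 8   # location-invariant channels shared across locations
LEARNING_RATE = 0.05

# Separate weight banks for two retinal locations, plus one shared bank.
w_specific = {loc: np.zeros(N_SPECIFIC) for loc in ("loc1", "loc2")}
w_invariant = np.zeros(N_INVARIANT)

def decision(act_specific, act_invariant, loc):
    """The response is a weighted sum over both representation levels."""
    return w_specific[loc] @ act_specific + w_invariant @ act_invariant

def learn(act_specific, act_invariant, loc, target):
    """Error-driven reweighting at both levels (a stand-in for the
    paper's augmented Hebbian rule)."""
    global w_invariant
    err = target - np.tanh(decision(act_specific, act_invariant, loc))
    w_specific[loc] += LEARNING_RATE * err * act_specific
    w_invariant += LEARNING_RATE * err * act_invariant

# Train at loc1 only; the task-relevant stimulus drives channel 0
# at both levels, plus a little channel noise.
for _ in range(200):
    target = rng.choice([-1.0, 1.0])
    act_s = rng.normal(0.0, 0.1, N_SPECIFIC); act_s[0] += target
    act_i = rng.normal(0.0, 0.1, N_INVARIANT); act_i[0] += target
    learn(act_s, act_i, "loc1", target)

# Transfer probe at loc2: its specific weights are still untrained (zero),
# but the shared invariant weights alone support the correct decision.
probe_s = np.zeros(N_SPECIFIC); probe_s[0] = 1.0
probe_i = np.zeros(N_INVARIANT); probe_i[0] = 1.0
print(decision(probe_s, probe_i, "loc2") > 0.0)  # partial transfer
```

In this toy version, switching the orientation (i.e., making the stimulus drive a different, dissimilar channel) would find zero weight at both levels, mirroring the specificity observed for orientation switches.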
Keywords: reweighting models | Hebbian models

Almost all perceptual tasks exhibit perceptual learning, improving people's ability to detect, discriminate, or identify visual stimuli. These improvements due to practice are the basis of visual expertise. Practice improves the ability to perceive orientation, spatial frequency, patterns and texture, motion direction, and other stimulus features (1-4). Learned perceptual improvements generally show some specificity to the feature and to the retinal location of training. Specificity of trained improvements to retinal location and feature in behavioral studies of texture orientation (5, 6) or simple pattern orientation judgments (7, 8) inspired early researchers to posit that practice altered the responses of early visual representations (V1/V2) with small receptive fields, retinotopic structure, and relatively narrow orientation and spatial frequency tuning (6). However, the generalization of learned perceptual skills over retinal locations is almost always practically advantageous, and is sometimes observed (9). Whether perceptual learning reflects changes in retinotopic representations in early visual cortical areas (6) or alternatively, as we have suggested elsewhere, is primarily acco...
Feedback plays an interesting role in perceptual learning. The complex pattern of empirical results concerning the role of feedback in perceptual learning rules out both a pure supervised mode and a pure unsupervised mode of learning, and has led some researchers to propose that feedback may change the learning rate through top-down control but does not act as a teaching signal in perceptual learning (M. H. Herzog & M. Fahle, 1998). In this study, we tested the predictions of an augmented Hebbian reweighting model (AHRM) of perceptual learning (A. Petrov, B. A. Dosher, & Z.-L. Lu, 2005), in which feedback influences the effective rate of learning by serving as an additional input, not as a direct teaching signal. We investigated the interactions between feedback and training accuracy in a Gabor orientation identification task over six training days. The accelerated stochastic approximation method was used to track threshold contrasts at particular performance accuracy levels throughout training. Subjects were divided into four groups: high training accuracy (85% correct) with and without feedback, and low training accuracy (65% correct) with and without feedback. Contrast thresholds improved in the high training accuracy condition, independent of the feedback condition. However, thresholds improved in the low training accuracy condition only in the presence of feedback. The results are both qualitatively and quantitatively consistent with the predictions of the augmented Hebbian learning model and are not consistent with pure supervised error-correction or pure Hebbian learning models.
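The core claim, that feedback acts as an additional input rather than a teaching signal, can be sketched as follows. The channel count, noise levels, feedback weight, and learning rate are all illustrative assumptions, not the AHRM's fitted parameters. With weights starting at zero, a purely Hebbian update produces no learning on low-accuracy stimuli unless the feedback input drives the post-synaptic activation:

```python
import numpy as np

rng = np.random.default_rng(1)
LR = 0.02     # learning rate (assumed)
W_FB = 1.0    # strength of the feedback input (assumed)

def train(n_trials, feedback, signal=0.3):
    """Hebbian reweighting on weak, low-accuracy stimuli. Feedback
    enters only as an extra input that shifts the post-synaptic
    activation; there is no explicit error/teaching signal."""
    w = np.zeros(4)
    for _ in range(n_trials):
        target = rng.choice([-1.0, 1.0])
        act = rng.normal(0.0, 0.5, 4)   # noisy channel activations
        act[0] += signal * target       # weak task-relevant channel
        post = w @ act
        if feedback:
            post += W_FB * target       # feedback as an additional input
        o = np.tanh(post)
        w += LR * o * act               # pure Hebbian product: pre x post
    return w

w_fb = train(500, feedback=True)
w_nofb = train(500, feedback=False)

# With zero initial weights and no feedback there is no post-synaptic
# activity, hence no Hebbian change, echoing the finding that
# low-accuracy training fails to improve thresholds without feedback.
print(w_fb[0], w_nofb[0])
```

At high training accuracy the self-generated output is usually correct, so the Hebbian product learns even without the feedback term; this sketch only demonstrates the low-accuracy limiting case.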
In this study, we investigated whether mixing easy and difficult trials can lead to learning in the difficult conditions. We hypothesized that, while feedback is necessary for significant learning in training regimes consisting solely of low training accuracy trials, training mixtures with a sufficient proportion of high accuracy trials would lead to significant learning without feedback. Thirty-six subjects were divided into one experimental group, in which high training accuracy trials were mixed with low training accuracy trials without feedback, and five control groups: a high-low mixture with feedback, high-high mixtures with and without feedback, and low-low mixtures with and without feedback. Contrast thresholds improved significantly in the low accuracy condition in the presence of high training accuracy trials (the high-low mixture group) in the absence of feedback, although no significant learning was found in the low accuracy condition in the low-low mixture group without feedback. Moreover, the magnitude of improvement in low accuracy trials without feedback in the high-low training mixture was comparable to that in the high accuracy training without feedback condition and to those obtained in the presence of trial-by-trial external feedback. The results are both qualitatively and quantitatively consistent with the predictions of an augmented Hebbian learning model. We conclude that mixed training at high and low accuracy levels can lead to perceptual learning at low training accuracy levels without feedback.
Using the external noise plus training paradigm, we have consistently found that two independent mechanisms, stimulus enhancement and external noise exclusion, support perceptual learning in a range of tasks. Here, we show that re-weighting of stable early sensory representations through Hebbian learning (Petrov et al., 2005, 2006) can generate performance patterns that parallel a large range of empirical data: (1) perceptual learning reduced contrast thresholds at all levels of external noise in peripheral orientation identification (Dosher & Lu, 1998, 1999), (2) training with low noise exemplars transferred to performance in high noise, while training with exemplars embedded in high external noise transferred little to performance in low noise (Dosher & Lu, 2005), and (3) pre-training in high external noise only reduced subsequent learning in high external noise, whereas pre-training in zero external noise left very little additional learning in all the external noise conditions (Lu et al., 2006). In the augmented Hebbian re-weighting model (AHRM), perceptual learning strengthens or maintains the connections between the most closely tuned visual channels and a learned categorization structure, while it prunes or reduces inputs from task-irrelevant channels. Reducing the weights on irrelevant channels reduces the contributions of external noise and additive internal noise. Manifestation of stimulus enhancement or external noise exclusion depends on the initial state of internal noise and connection weights at the beginning of a learning task. Both mechanisms reflect re-weighting of stable early sensory representations.
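The external noise exclusion mechanism described here has a simple arithmetic core: independent external noise that drives every channel reaches the decision unit with variance proportional to the sum of squared weights, so pruning irrelevant channels reduces decision noise while the signal, carried by the relevant channel, is preserved. A minimal sketch, with assumed channel counts and unit noise variance:

```python
import numpy as np

N_CHANNELS = 10
NOISE_VAR = 1.0  # external noise variance driving every channel (assumed)

def decision_noise_var(w, noise_var=NOISE_VAR):
    """Independent external noise on each channel reaches the decision
    unit with variance proportional to the sum of squared weights."""
    return noise_var * float(np.sum(w ** 2))

# Naive observer: all channels weighted equally.
w_before = np.ones(N_CHANNELS)
# After learning: irrelevant channels pruned; only channel 0 (the
# task-relevant channel) keeps its weight, so the signal is unchanged.
w_after = np.zeros(N_CHANNELS)
w_after[0] = 1.0

print(decision_noise_var(w_before))  # 10.0
print(decision_noise_var(w_after))   # 1.0: tenfold noise exclusion
```

The same squared-weight sum also scales additive internal noise at the decision stage, which is why reweighting can manifest as stimulus enhancement or noise exclusion depending on the initial noise and weight state.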
Feedback has been shown to play a complex role in visual perceptual learning: it is necessary for performance improvement in some conditions but not in others. Different forms of feedback, such as trial-by-trial feedback and block feedback, may both facilitate learning, but through different mechanisms, and false feedback can abolish learning. We account for all of these results with the augmented Hebbian reweighting model (AHRM). Specifically, three major factors in the model drive performance improvement: external trial-by-trial feedback when available, the self-generated output acting as internal feedback when no external feedback is available, and adaptive criterion control based on block feedback. By simulating a comprehensive feedback study (Herzog & Fahle, 1997, Vision Research, 37(15), 2133–2141), we show that the model predictions account for the pattern of learning in seven major feedback conditions. The AHRM can fully explain the complex empirical results on the role of feedback in visual perceptual learning.
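The third factor, adaptive criterion control, can be sketched as a block-level update (the gain and the target response proportion below are assumed values, not the AHRM's actual parameters): if one response was produced more often than expected over a block, the criterion shifts to counteract that bias.

```python
def update_criterion(criterion, prop_resp_a, target_prop=0.5, gain=0.5):
    """Block-level criterion adaptation (sketch): if response "A" was
    produced more often than the target proportion over the last block,
    raise the criterion so "A" becomes harder to choose."""
    return criterion + gain * (prop_resp_a - target_prop)

c = 0.0
c = update_criterion(c, prop_resp_a=0.7)  # block with 70% "A" responses
print(c)  # criterion shifts upward, against the over-produced response
```

This is the kind of mechanism through which block feedback can influence learning even though it carries no trial-by-trial teaching signal.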