Summary Practice improves the discrimination of many basic visual features, such as contrast, orientation, and positional offset [1–7]. Perceptual learning of many of these tasks is found to be retinal-location specific, in that learning transfers little to an untrained retinal location [1, 6–8]. In most perceptual learning models, this location specificity is interpreted as a pointer to a retinotopic early visual cortical locus of learning [1, 6–11]. Alternatively, an untested hypothesis is that learning could occur at a central site but consist of two separate aspects: learning to discriminate a specific stimulus feature (“feature learning”) and learning to deal with stimulus-nonspecific factors, such as local noise, at the stimulus location (“location learning”) [12]. On this account, learning is not transferable to a new location that has never been location-trained. To test this hypothesis, we developed a novel double-training paradigm that employed conventional feature training (e.g., contrast) at one location and additional training with an irrelevant feature/task (e.g., orientation) at a second location, either simultaneously or at a different time. Our results showed that this additional location training enabled complete transfer of feature learning (e.g., contrast) to the second location. This finding challenges location specificity and its inferred cortical retinotopy as central concepts in many perceptual learning models, and suggests that perceptual learning involves higher, non-retinotopic brain areas that enable location transfer.
Visual perceptual learning models, as constrained by orientation and location specificities, propose that learning reflects either changes in V1 neuronal tuning or reweighting of specific V1 inputs in the visual cortex or higher areas. Here we demonstrate that, with a training-plus-exposure procedure, in which observers are trained at one orientation and either simultaneously or subsequently passively exposed to a second, transfer orientation, perceptual learning can completely transfer to the second orientation in tasks known to be orientation specific. However, transfer fails if exposure precedes training. These results challenge existing specific perceptual learning models by suggesting a more general perceptual learning process. We propose a rule-based learning model to explain perceptual learning and its specificity and transfer. In this model, a decision unit in high-level brain areas learns the rules for reweighting the V1 inputs through training. However, these rules cannot be applied to a new orientation or location because the decision unit cannot functionally connect to the new V1 inputs, which are unattended or even suppressed after training at a different orientation or location; this leads to specificity. Repeated orientation exposure or location training reactivates these inputs to establish the functional connections and enable the transfer of learning.
The findings indicate that visual acuity assessment in Chinese readers is complicated by the spatial complexity of Chinese characters. However, the Snellen E, the current national standard of acuity measurement in China, and Chinese characters showed similar dependence on optical defocus, which may indicate a potentially valid way to infer functional vision in Chinese readers from Snellen E acuity.
Written Chinese is distinct from alphabetic languages because of its enormous number of characters spanning a great range of spatial complexities (stroke numbers). In this study we investigated the impact of spatial complexity on the legibility of Chinese characters, as well as the associated crowding, in peripheral vision. Our results showed that for isolated characters, threshold sizes of complex characters increased faster with retinal eccentricity than did those of simple characters, suggesting possible "within-character" crowding among the parts of complex Chinese characters. However, such "within-character" crowding was rendered negligible by strong "between-character" crowding introduced by flankers. When the target and flankers belonged to different complexity groups, the intensity and extent of crowding were greatly reduced, which could be explained by top-down influences as well as lower-level mechanisms. We suggest that crowding can be attributed to multiple mechanisms at different levels of visual processing.
Perceptual learning, a process in which training improves visual discrimination, is often specific to the trained retinal location, and this location specificity is frequently regarded as an indication of neural plasticity in the retinotopic visual cortex. However, our previous studies have shown that "double training" enables location-specific perceptual learning, such as Vernier learning, to completely transfer to a new location where an irrelevant task is practiced. Here we show that Vernier learning can be actuated by less location-specific orientation or motion-direction learning to transfer to completely untrained retinal locations. This "piggybacking" effect occurs even if both tasks are trained at the same retinal location. However, piggybacking does not occur when the Vernier task is paired with a more location-specific contrast-discrimination task. This previously unknown complexity challenges the current understanding of perceptual learning and its specificity/transfer. Orientation and motion-direction learning, but not contrast and Vernier learning, appears to activate a global process that allows learning transfer to untrained locations. Moreover, when paired with orientation or motion-direction learning, Vernier learning may be "piggybacked" by the activated global process to transfer to other untrained retinal locations. How this task-specific global activation process is achieved is as yet unknown.
Location-specific perceptual learning can be rendered transferable to a new location with double training, in which feature training (e.g., contrast) is accompanied by additional location training at the new location, even with an irrelevant task (e.g., orientation). Here we investigated the impact of the relevancy (to feature training) and demand of the location-training tasks on double-training-enabled learning transfer. We found that location training with an irrelevant task (Gabor vs. letter judgment, or contrast discrimination) limited transfer of Vernier learning to the trained orientation only. However, performing a relevant suprathreshold orthogonal Vernier task prompted additional transfer to an untrained orthogonal orientation. In addition, the amount of learning transfer may depend on the demand of the location training as well as on the double-training procedure. These results characterize how double training potentiates the functional connections between a learned high-level decision unit and visual inputs from an untrained location to enable transfer of learning across retinal locations.
Perceptual learning of visual features occurs when multiple stimuli are presented in a fixed sequence (temporal patterning), but not when they are presented in random order (roving). This points to the need for proper stimulus coding in order for learning of multiple stimuli to occur. We examined the stimulus coding rules for learning with multiple stimuli. Our results demonstrate that: (1) stimulus rhythm is necessary for temporal patterning to take effect during practice; (2) learning consolidation is subject to disruption by roving up to 4 h after each practice session; (3) importantly, after completion of temporal-patterned learning, performance is undisrupted by extended roving training; (4) roving is ineffective if each stimulus is presented for five or more consecutive trials; and (5) roving is also ineffective if each stimulus has a distinct identity. We propose that for multi-stimulus learning to occur, the brain needs to conceptually “tag” each stimulus, in order to switch attention to the appropriate perceptual template. Stimulus temporal patterning assists in tagging stimuli and switching attention through its rhythmic stimulus sequence.
The complete transfer of learning suggests that perceptual learning in amblyopia may reflect high-level learning of rules for performing a visual discrimination task. These rules are applicable to new orientations to enable learning transfer. Therefore, perceptual learning may improve amblyopic vision mainly through rule-based cognitive compensation.