Repeated performance of visual tasks leads to long-lasting increased sensitivity to the trained stimulus, a phenomenon termed perceptual learning. A ubiquitous property of visual learning is specificity: performance improvement obtained during training applies only to the trained stimulus features, which are thought to be encoded in sensory brain regions [1-3]. However, recent results show performance decrements with an increasing number of trials within a training session [4, 5]. This selective sensitivity reduction is thought to arise from sensory adaptation [5, 6]. Here we show, using the standard texture discrimination task [7], that location specificity is a consequence of sensory adaptation; that is, it results from selectively reduced sensitivity due to repeated stimulation. Observers practiced the texture task with the target presented at a fixed location within a background texture. To remove adaptation, we added task-irrelevant ("dummy") trials with the texture oriented 45° relative to the target's orientation, a manipulation known to counteract adaptation [8]. The results indicate location specificity with the standard paradigm, but complete generalization to a new location when adaptation is removed. We suggest that adaptation interferes with invariant pattern-discrimination learning by inducing network-dependent changes in local visual representations.
Inflexible behavior is a core characteristic of autism spectrum disorder (ASD), but its underlying cause is unknown. Using a perceptual learning protocol, we observed initially efficient learning in ASD that was followed by anomalously poor learning when the location of the target was changed (over-specificity). Reducing stimulus repetition eliminated over-specificity. Our results indicate that inflexible behavior may be evident ubiquitously in ASD, even in sensory learning, but can be circumvented by specifically designed stimulation protocols.
The short-lasting attenuation of brain oscillations is termed event-related desynchronization (ERD). It is frequently found in the alpha and beta bands in humans during generation, observation, and imagery of movement and is considered to reflect cortical motor activity and action-perception coupling. The shared information driving ERD in all these motor-related behaviors is unknown. We investigated whether particular laws governing production and perception of curved movement may account for the attenuation of alpha and beta rhythms. Human movement appears to be governed by relatively few kinematic laws of motion. One dominant law in biological motion kinematics is the 2/3 power law (PL), which imposes a strong dependency of movement speed on curvature and is prominent in action-perception coupling. Here we directly examined whether the 2/3 PL elicits ERD during motion observation by characterizing the spatiotemporal signature of ERD. ERDs were measured while human subjects observed a cloud of dots moving along elliptical trajectories either complying with or violating the 2/3 PL. We found that ERD within both frequency bands was consistently stronger, arose faster, and was more widespread while observing motion obeying the 2/3 PL. An activity pattern showing clear 2/3 PL preference and lying within the alpha band was observed exclusively above central motor areas, whereas 2/3 PL preference in the beta band was observed in additional prefrontal-central cortical sites. Our findings reveal that compliance with the 2/3 PL is sufficient to elicit a selective ERD response in the human brain.
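The 2/3 power law relates angular velocity A to local curvature C as A = K·C^(2/3), which for tangential speed v is equivalent to v = K·C^(-1/3). As a minimal sketch (not the study's actual stimulus code; function and variable names are illustrative), the speed profile of a law-compliant elliptical trajectory can be generated from the ellipse's analytic curvature:

```python
import numpy as np

def ellipse_speed_profile(a=2.0, b=1.0, k_gain=1.0, n=1000):
    """Tangential speed along an ellipse complying with the 2/3 power law.

    Angular velocity A = K * C**(2/3)  <=>  speed v = K * C**(-1/3),
    where C is the local curvature (names here are illustrative).
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # Curvature of the ellipse x = a*cos(theta), y = b*sin(theta)
    curvature = (a * b) / (a**2 * np.sin(theta)**2 + b**2 * np.cos(theta)**2) ** 1.5
    speed = k_gain * curvature ** (-1.0 / 3.0)
    return curvature, speed

curvature, speed = ellipse_speed_profile()
# Recover the exponent: A = v * C, so log(A) vs log(C) has slope 2/3
ang_vel = speed * curvature
slope, _ = np.polyfit(np.log(curvature), np.log(ang_vel), 1)
print(round(slope, 3))  # -> 0.667
```

A trajectory violating the law (as in the study's control condition) would simply use a different exponent, breaking the speed-curvature coupling.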
Spatiotemporal interactions affect visual performance under repeated stimulation conditions, showing both incremental (commonly related to learning) and decremental (possibly sensory adaptation) effects. Here we examined the role of spatiotemporal consistencies in learning dynamics and transfer. The backward-masked texture-discrimination paradigm was used, with stimulus onset asynchrony (SOA) controlling the observers' performance level. Temporal consistencies were examined by modifying the order in which SOA was varied during a training session: gradually reduced SOA (high consistency) versus randomized SOA (low consistency). Spatial consistencies were reduced by interleaving standard target trials with oriented 'dummy' trials containing only the background texture (no target, oriented 45° relative to the target's orientation). Our results showed reduced improvement following training with gradual SOA, as compared with random SOA. However, this difference was eliminated by randomizing SOA only at the initial and final segments of training, revealing a contaminating effect of temporal consistencies on threshold estimation rather than on learning. Inserting the 'dummy' trials (reduced spatial consistencies) facilitated both the learning and the subsequent transfer of learning, but only when sufficient pre-training was provided. These results indicate that visual sensitivity depends on a balance between two opposing processes, perceptual learning and sensory adaptation, both of which depend on spatiotemporal consistencies. Reducing spatiotemporal consistencies during training reduces the short-term spatiotemporal interactions that interfere with threshold estimation, learning, and generalization of learning. We consider the results within a theoretical framework, assuming an adaptable low-level network and a readout mechanism, with orientation- and location-specific low-level adaptation interfering with the readout learning.
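The two temporal-consistency conditions differ only in trial order, not trial content. A minimal sketch of the scheduling logic (SOA values, block size, and function names are hypothetical, not the study's actual parameters):

```python
import random

def gradual_soa_schedule(soas, trials_per_soa=50):
    """High temporal consistency: SOA decreases across fixed blocks."""
    schedule = []
    for soa in sorted(soas, reverse=True):
        schedule.extend([soa] * trials_per_soa)
    return schedule

def random_soa_schedule(soas, trials_per_soa=50, seed=0):
    """Low temporal consistency: the same trials, randomly interleaved."""
    schedule = gradual_soa_schedule(soas, trials_per_soa)
    random.Random(seed).shuffle(schedule)
    return schedule

soas_ms = [340, 220, 140, 100, 60]  # illustrative SOA values (ms)
gradual = gradual_soa_schedule(soas_ms)
randomized = random_soa_schedule(soas_ms)
print(len(gradual), gradual[0], gradual[-1])  # -> 250 340 60
```

Because both schedules contain identical trial sets, any performance difference between conditions is attributable to ordering (temporal consistency) alone.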
Sensory adaptation and perceptual learning are two forms of plasticity in the visual system, with some potentially overlapping neural mechanisms and functional benefits. However, they have been largely considered in isolation. Here we examined whether extensive perceptual training with oriented textures (texture discrimination task, TDT) induces adaptation tilt aftereffects (TAE). Texture elements were oriented lines at -22.5° (target) and 22.5° (background). Observers were trained in 5 daily sessions on the TDT, with 800-1000 trials/session. Thresholds increased within the daily sessions, showing within-session performance deterioration, but decreased between days, showing learning. To evaluate TAE, perceived vertical (0°) was measured prior to and after each daily session using a single line element. The results showed a TAE of ∼1.5° at retinal locations consistently stimulated by the target, but none at locations consistently stimulated by the background texture. Retinal locations equally stimulated by target and background elements showed a significant TAE (∼0.7°), in the direction expected from target-driven sensory adaptation. Moreover, these locations showed increasing TAE persistence with training. Additional experiments with a modified target, designed to balance stimulation around the vertical direction at all target locations, confirmed the locality of the task-dependent TAE. The present results support a strong link between perceptual learning and local orientation-selective adaptation leading to TAE; the latter was shown here to be task and experience dependent.