In studies of perceptual learning (PL), subjects are typically trained extensively across many sessions to achieve perceptual benefits on the stimuli in those tasks. There is currently significant debate regarding which sources of brain plasticity underlie these learning improvements. Here we investigate the hypothesis that PL leads, among other changes, to task automaticity, especially in the presence of the trained stimuli. To investigate this hypothesis, we trained participants for eight sessions to find an oriented target in a field of near-oriented distractors and examined alpha-band activity, which modulates with attention to visual stimuli, as a possible measure of automaticity. Alpha-band activity was acquired via electroencephalogram (EEG), before and after training, as participants performed the task with trained and untrained stimuli. Results show that participants underwent significant learning in this task (as assessed by threshold, accuracy, and reaction time improvements) and that, following training, alpha power increased during the pre-stimulus period and then underwent greater desynchronization at the time of stimulus presentation. However, these changes in alpha-band activity were not specific to the trained stimuli, with similar patterns of posttraining alpha power for trained and untrained stimuli. These data are consistent with the view that participants were more efficient at focusing resources at the time of stimulus presentation and with a greater automaticity of task performance. These findings have implications for PL, as transfer effects from trained to untrained stimuli may partially depend on the differential effort of the individual at the time of stimulus processing.
This study examined how different forms of decision-making modulate time perception. Participants performed temporal bisection and generalization tasks, requiring them either to categorize a stimulus duration as more similar to a short or a long standard (bisection) or to identify whether or not a duration was the same as a previously presented standard (generalization). They responded faster in the bisection task than in the generalization task for long durations. This behavioral effect was accompanied by modulation of event-related potentials (ERPs). More specifically, between 500 ms and 600 ms after stimulus offset, a late positive component (LPC) appearing in the centro-parietal region showed lower amplitude in the bisection task than in the generalization task for long durations, mirroring the behavioral result. Before (200-500 ms) and after (600-800 ms) this window, the amplitude of the LPC was globally larger in the generalization paradigm, independently of the presented duration. Finally, the LPC peaked earlier for long durations than for short ones, indicating that the decision about the former stimuli was made earlier than for the latter. Taken together, these results indicate that the categorization of durations engages fewer cognitive resources than their identification.
The mechanisms guiding our learning and memory processes are of key interest to human cognition. While much research shows that attention and reinforcement processes help guide encoding, there is still much to learn regarding how our brains choose what to remember. Recent research on task-irrelevant perceptual learning (TIPL) has found that information presented coincident with important events is better encoded even if participants are not aware of its presence (see Seitz & Watanabe, 2009). However, a limitation of existing studies of TIPL is that they provide little information regarding the depth of encoding supported by pairing a stimulus with a behaviorally relevant event. The objective of this research was to understand the depth of encoding of information that is learned through TIPL. To do so, we adopted a variant of the "remember/know" paradigm, recently reported by Ingram, Mickes, and Wixted (2012), in which multiple confidence levels of both familiar (know) and remember reports are collected (Experiment 1) and in which episodic information is tested (Experiment 2). TIPL was found in both experiments, with higher recognition performance for target-paired than for distractor-paired images. Furthermore, TIPL benefited both "familiar" and "remember" reports. The results of Experiment 2 indicate that the most confident "remember" response was associated with episodic information, in that participants were able to access the location of image presentation for these items. Together, these results indicate that TIPL produces a deep enhancement in the encoding of target-paired information.
When we perform any task, we engage a diverse set of processes, and these processes can be optimized with learning. While substantial research probes specific aspects of learning, there is a scarcity of research regarding interactions between different types of learning. Here, we investigate possible interactions between Perceptual Learning (PL) and Contextual Learning (CL), two types of implicit learning that have garnered much attention in the psychological sciences and that often co-occur in natural settings. PL increases sensitivity to features of task targets and distractors and is thought to involve improvements in low-level perceptual processing. CL involves learning regularities in the environment (such as spatial relations between objects) and is consistent with improvements in higher-level perceptual processes. Surprisingly, we found CL, PL for target features, and PL for distractor features to be independent. This triple dissociation demonstrates how different learning processes may operate in parallel as tasks are mastered.
Research on perceptual learning has received significant interest due to findings that training on perceptual tasks can yield learning effects that are specific to the stimulus features of that task. However, recent studies have demonstrated that while training a single stimulus at a single location can yield a high degree of stimulus specificity, training multiple features, or training at multiple locations, can reveal broad transfer of learning to untrained features or stimulus locations. We devised a high-resolution, high-capacity perceptual learning procedure with the goal of testing whether spatial specificity can be found in cases where observers are highly trained to discriminate stimuli in many different locations in the visual field. We found a surprising degree of location-specific learning, where performance was significantly better when target stimuli were presented at 1 of the 24 trained locations than when they were placed at 1 of the 12 untrained locations. This result is particularly impressive given that untrained locations were within a couple of degrees of visual angle of those that were trained. Given the large number of trained locations, the fact that the trained and untrained locations were interspersed, and the high degree of spatial precision of the learning, we suggest that these results are difficult to account for using attention or decision strategies and instead suggest that learning may have taken place for each location separately in retinotopically organized visual cortex.