When two masked targets are presented in rapid succession, correct identification of the first target (T1) leads to a dramatic impairment in identification of the second target (T2). Several studies of this so-called attentional blink (AB) phenomenon have provided behavioral and physiological evidence that T2 is processed to the semantic level, despite the profound impairment in T2 report. These findings have been interpreted as an example of perception without awareness and have been explained by models that assume that T2 is processed extensively even though it does not gain access to consciousness. The present study reports two experiments that test this assumption. In Experiment 1, the perceptual load of the T1 task was manipulated and T2 was a word that was either related or unrelated to a context word presented at the beginning of each trial. The event-related potential (ERP) technique was used to isolate the context-sensitive N400 component evoked by the T2 word. The ERP data revealed that there was a complete suppression of the N400 during the AB when the perceptual load was high, but not when perceptual load was low. Experiment 2 replicated the high-load condition of Experiment 1 while ruling out two alternative explanations for the reduction of the N400 during the AB. The results of both experiments demonstrate that word meanings are not always accessed during the AB and are consistent with studies suggesting that attention can act to select information at multiple stages of processing, depending on concurrent task demands.
Environmental context learned without awareness can facilitate visual processing of goal-relevant information. According to one view, the benefit of implicitly learned context relies on the neural systems involved in spatial attention and hippocampus-mediated memory. While this view has received empirical support, it contradicts traditional models of hippocampal function. The purpose of the present work was to clarify the influence of spatial context on visual search performance and on brain structures involved in memory and attention. Event-related functional magnetic resonance imaging revealed that activity in the hippocampus, as well as in visual and parietal cortex, was modulated by learned visual context even though participants’ subjective reports and performance on a post-experiment recognition task indicated no explicit knowledge of the learned context. Moreover, the magnitude of the initial selective hippocampus response predicted the magnitude of the behavioral benefit due to context observed at the end of the experiment. The results suggest that implicit contextual learning is mediated by attention and memory and that these systems interact to support search of our environment.
A prevalent view of visual working memory (VWM) is that visual information is actively maintained in the form of perceptually integrated objects. Such reliance on object-based representations would predict that after an object is fully encoded into VWM, all features of that object would need to be maintained as a coherent unit. Here, we evaluated this idea by testing whether memory resources can be redeployed to a specific feature of an object already stored in VWM. We found that observers can utilize a retrospective cue presented during the maintenance period to attenuate both the gradual deterioration and complete loss of memory for a cued feature over time, but at the cost of accelerated loss of information regarding the uncued feature. Our findings demonstrate that object representations held within VWM can be decomposed into individual features, and that having to retain additional features imposes greater demands on active maintenance processes.
Normal binocular vision emerges from the combination of neural signals arising within separate monocular pathways. It is natural to wonder whether both eyes contribute equally to the unified cyclopean impression we ordinarily experience. Binocular rivalry, which occurs when the inputs to the two eyes are markedly different, affords a useful means for quantifying the balance of influence exerted by the eyes (called sensory eye dominance, SED) and for relating that degree of balance to other aspects of binocular visual function. However, the precise ways in which binocular rivalry dynamics change when the eyes are unbalanced remain uncharted. Relying on the widespread individual variability in the relative predominance of the two eyes demonstrated in previous studies, we found that an observer’s overall tendency to see one eye’s image more than the other’s was driven by differences in both the duration and the frequency of instances of that eye’s perceptual dominance. Specifically, larger imbalances between the eyes were associated with longer and more frequent periods of exclusive dominance for the stronger eye. Increases in occurrences of dominant-eye percepts were mediated in part by a tendency to experience “return transitions” to the predominant eye – that is, observers often experienced sequential exclusive percepts of the dominant eye’s image separated only by an intervening mixed percept. Together, these results indicate that the often-observed imbalances between the eyes during binocular rivalry reflect true differences in sensory processing, a finding that has implications for our understanding of the mechanisms underlying binocular vision in general.