At any instant, our visual system allows us to perceive a rich and detailed visual world. Yet our internal, explicit representation of this visual world is extremely sparse: we can only hold in mind a minute fraction of the visual scene. These mental representations are stored in visual short-term memory (VSTM). Even though VSTM is essential for the execution of a wide array of perceptual and cognitive functions, and is supported by an extensive network of brain regions, its storage capacity is severely limited. With the use of functional magnetic resonance imaging, we show here that this capacity limit is neurally reflected in one node of this network: activity in the posterior parietal cortex is tightly correlated with the limited amount of scene information that can be stored in VSTM. These results suggest that the posterior parietal cortex is a key neural locus of our impoverished mental representation of the visual world.
…and René Marois, Vanderbilt University, Nashville, Tennessee

Under conditions of rapid serial visual presentation, subjects display a reduced ability to report the second of two targets (Target 2; T2) in a stream of distractors if it appears within 200-500 msec of Target 1 (T1). This effect, known as the attentional blink (AB), has been central in characterizing the limits of humans' ability to consciously perceive stimuli distributed across time. Here, we review theoretical accounts of the AB and examine how they explain key findings in the literature. We conclude that the AB arises from the attentional demands of T1 for selection, working memory encoding, episodic registration, and response selection, which prevent this high-level central resource from being applied to T2 at short T1-T2 lags. T1 processing also transiently impairs the redeployment of these attentional resources to subsequent targets and the inhibition of distractors that appear in close temporal proximity to T2. Although these findings are consistent with a multifactorial account of the AB, they can also be largely explained by assuming that the activation of these multiple processes depends on a common capacity-limited attentional process for selecting behaviorally relevant events presented among temporally distributed distractors. Thus, at its core, the attentional blink may ultimately reveal the temporal limits of the deployment of selective attention.

Our visual environment constantly changes across the dimensions of both time and space. Within the first few hundred milliseconds of viewing a scene, the visual system is bombarded with much more sensory information than it is able to process up to awareness. To overcome this limitation, humans are equipped with filters at a number of different levels of information processing. For example, high-resolution vision is restricted to the fovea, with acuity drastically reduced at the periphery.
Such front-end mechanisms reduce the initial input; however, they still leave the visual system with an overwhelming amount of information to analyze. To meet this challenge, the human attentional system prioritizes salient stimuli (targets) that are to undergo extended processing and discards stimuli that are less relevant for behavior after only limited analysis (Broad-
A region in the lateral aspect of the fusiform gyrus (FG) is more engaged by human faces than any other category of image. It has come to be known as the 'fusiform face area' (FFA). The origin and extent of this specialization is currently a topic of great interest and debate. This is of special relevance to autism, because recent studies have shown that the FFA is hypoactive to faces in this disorder. In two linked functional magnetic resonance imaging (fMRI) studies of healthy young adults, we show here that the FFA is engaged by a social attribution task (SAT) involving perception of human-like interactions among three simple geometric shapes. The amygdala, temporal pole, medial prefrontal cortex, inferolateral frontal cortex and superior temporal sulci were also significantly engaged. Activation of the FFA to a task without faces challenges the received view that the FFA is restricted in its activities to the perception of faces. We speculate that abstract semantic information associated with faces is encoded in the FG region and retrieved for social computations. From this perspective, the literature on hypoactivation of the FFA in autism may be interpreted as a reflection of a core social cognitive mechanism underlying the disorder.
When humans attempt to perform two tasks at once, execution of the first task usually leads to postponement of the second one. This task delay is thought to result from a bottleneck occurring at a central, amodal stage of information processing that precludes two response selection or decision-making operations from being concurrently executed. Using time-resolved functional magnetic resonance imaging (fMRI), here we present a neural basis for such dual-task limitations, namely the inability of the posterior lateral prefrontal cortex, and possibly the superior medial frontal cortex, to process two decision-making operations at once. These results suggest that a neural network of frontal lobe areas acts as a central bottleneck of information processing that severely limits our ability to multitask.
Attention selects which sensory information is preferentially processed and ultimately reaches our awareness. Attention, however, is not a unitary process: It can be captured by unexpected or salient events (stimulus-driven) or it can be deployed under voluntary control (goal-directed), and these two forms of attention are implemented by largely distinct ventral and dorsal parieto-frontal networks. Yet, for coherent behavior and awareness to emerge, stimulus-driven and goal-directed behavior must ultimately interact. Here we show that the ventral, but not dorsal, network can account for stimulus-driven attentional limits to conscious perception, and that it is in the lateral prefrontal component of that network where stimulus-driven and goal-directed attention converge. Although these results do not rule out dorsal network involvement in awareness when goal-directed task demands are present, they point to a general role for the lateral prefrontal cortex in the control of attention and awareness.
Our ability to multitask is severely limited: Task performance deteriorates when we attempt to undertake two or more tasks simultaneously. Remarkably, extensive training can greatly reduce such multitasking costs. While it is not known how training alters the brain to solve the multitasking problem, it likely involves the prefrontal cortex given this brain region's purported role in limiting multitasking performance. Here we show that the reduction of multitasking interference with training is not achieved by diverting the flow of information processing away from the prefrontal cortex, or by segregating prefrontal cells into independent task-specific neuronal ensembles, but rather by increasing the speed of information processing in this brain region, thereby allowing multiple tasks to be processed in rapid succession. These results not only reveal how training leads to efficient multitasking, they also provide a mechanistic account of multitasking limitations, namely the poor speed of information processing in human prefrontal cortex.
An influential theory suggests that integrated objects, rather than individual features, are the fundamental units that limit our capacity to temporarily store visual information (S. J. Luck & E. K. Vogel, 1997). Using a paradigm that independently estimates the number and precision of items stored in working memory (W. Zhang & S. J. Luck, 2008), here we show that the storage of features is not cost-free. The precision and number of objects held in working memory was estimated when observers had to remember either the color, the orientation, or both the color and orientation of simple objects. We found that while the quantity of stored objects was largely unaffected by increasing the number of features, the precision of these representations dramatically decreased. Moreover, this selective deterioration in object precision depended on the multiple features being contained within the same objects. Such fidelity costs were even observed with change detection paradigms when those paradigms placed demands on the precision of the stored visual representations. Taken together, these findings not only demonstrate that the maintenance of integrated features is costly; they also suggest that objects and features affect visual working memory capacity differently.
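The mixture-model paradigm cited above (W. Zhang & S. J. Luck, 2008) separates the number and precision of stored items by modeling response errors on a continuous report task as a mixture of a von Mises distribution (trials where the probed item was in memory) and a uniform distribution (guesses). The following is a minimal illustrative sketch of that idea, not the study's actual analysis code: the function names, grid-search fitting procedure, and simulated parameter values (70% of items in memory, concentration kappa = 8) are all assumptions made for demonstration.

```python
import numpy as np

def vonmises_pdf(x, kappa):
    # von Mises density centred at 0 (response error relative to the true feature value)
    return np.exp(kappa * np.cos(x)) / (2 * np.pi * np.i0(kappa))

def fit_mixture(errors):
    """Grid-search maximum-likelihood fit of the two-parameter mixture:
    p(error) = pm * vonMises(error; 0, kappa) + (1 - pm) * 1/(2*pi).
    Returns (pm, kappa): the probability the probed item was in memory
    (quantity) and the concentration of remembered responses (precision)."""
    best_ll, best_pm, best_kappa = -np.inf, None, None
    for pm in np.linspace(0.05, 1.0, 40):
        for kappa in np.linspace(0.5, 30.0, 60):
            ll = np.sum(np.log(pm * vonmises_pdf(errors, kappa)
                               + (1 - pm) / (2 * np.pi)))
            if ll > best_ll:
                best_ll, best_pm, best_kappa = ll, pm, kappa
    return best_pm, best_kappa

# Simulate one observer: on 70% of trials the probed item is in memory
# (errors scatter around zero); the remaining 30% are uniform guesses.
rng = np.random.default_rng(0)
n = 2000
in_memory = rng.random(n) < 0.7
errors = np.where(in_memory,
                  rng.vonmises(0.0, 8.0, n),      # remembered: kappa = 8
                  rng.uniform(-np.pi, np.pi, n))  # guessed: uniform error

pm_hat, kappa_hat = fit_mixture(errors)
print(round(pm_hat, 2), round(kappa_hat, 1))
```

Because the two parameters are estimated independently, a manipulation (such as adding a second feature per object) can lower the recovered precision (kappa) while leaving the recovered quantity (pm) unchanged, which is the dissociation the abstract reports.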