The ability to hold information in working memory (WM) is fundamental for cognition. Contrary to the longstanding view that WM depends on sustained, elevated activity, we present evidence suggesting that information can be held in WM via “activity-silent” synaptic mechanisms. Using machine learning to decode brain activity patterns, we show that the active representation of an item in WM drops to baseline when attention shifts away. A targeted pulse of transcranial magnetic stimulation produces a brief reemergence of the item in concurrently measured brain activity. This reactivation effect occurs, and influences memory performance, only when the item is potentially relevant later in the trial, suggesting that the representation is dynamic and modifiable via cognitive control. The results support a Synaptic Theory of Working Memory.
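A minimal sketch of this kind of decoding analysis, on synthetic data; the classifier choice, trial counts, and signal structure below are assumptions for illustration, not the study's actual pipeline:

```python
# Illustrative sketch (hypothetical data): decoding which of two items is held
# in WM from multivoxel activity patterns with a cross-validated linear classifier.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200
labels = rng.integers(0, 2, n_trials)             # which of two items was held in WM
patterns = rng.normal(size=(n_trials, n_voxels))  # simulated voxel activity
patterns[labels == 1, :20] += 0.5                 # weak item-specific signal

# Cross-validated decoding accuracy; chance level is 0.5 for two classes.
clf = LinearSVC(dual=False)
acc = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}")
```

Above-chance accuracy indicates an active, decodable representation; decoding that falls to chance during the delay is the signature of the "activity-silent" state described above.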
Navigation is an inherently dynamic and multimodal process, making it challenging to isolate the unique cognitive components that underlie it. Much of the literature on human spatial navigation assumes that 1) spatial navigation involves modality-independent, discrete metric representations (i.e., egocentric vs. allocentric), 2) such representations can be further distilled to elemental cognitive processes, and 3) these cognitive processes can be ascribed to unique brain regions. We argue that modality-independent spatial representations, instead of providing exact metrics about our surrounding environment, more often involve heuristics for estimating spatial topology useful for the task at hand. We also argue that egocentric (body-centered) and allocentric (world-centered) representations are better conceptualized as a continuum rather than as discrete categories. We propose a neural model to accommodate these ideas, arguing that such representations also involve a continuum of network interactions centered on retrosplenial and posterior parietal cortex, respectively. Our model thus helps explain behavioral and neural findings that are otherwise difficult to account for with classic models of spatial navigation and memory, providing a testable framework for novel experiments.
Although the manipulation of load is popular in visual working memory research, many studies confound general attentional demands with context binding by drawing memoranda from the same stimulus category. In this fMRI study of human observers (both sexes), we created high- versus low-binding conditions, while holding load constant, by comparing trials requiring memory for the direction of motion of one random dot kinematogram (RDK; 1M trials) versus three RDKs (3M), or versus one RDK and two color patches (1M2C). Memory precision was highest for 1M trials and comparable for 3M and 1M2C trials. Although delay-period activity in occipital cortex did not differ among the three conditions, returning to baseline for all three, multivariate pattern analysis decoding of a remembered RDK from occipital cortex was likewise highest for 1M trials and comparable for 3M and 1M2C trials. Delay-period activity in the intraparietal sulcus (IPS), although elevated for all three conditions, displayed more sensitivity to demands on context binding than to load per se. The 1M-to-3M increase in IPS signal predicted the 1M-to-3M declines in both behavioral and neural estimates of working memory precision. These effects strengthened along a caudal-to-rostral gradient, from IPS0 to IPS5. Context binding-independent load sensitivity was observed when analyses were lateralized and extended into PFC, with trend-level effects evident in left IPS and strong effects in left lateral PFC. These findings illustrate how visual working memory capacity limitations arise from multiple factors that each recruit dissociable brain systems.

Visual working memory capacity predicts performance on a wide array of cognitive and real-world outcomes. At least two theoretically distinct factors are proposed to influence visual working memory capacity limitations: an amodal attentional resource that must be shared across remembered items, and the demands on context binding. We unconfounded these two factors by varying load with items drawn from the same stimulus category ("high demands on context binding") versus items drawn from different stimulus categories ("low demands on context binding"). The results provide evidence for the dissociability, and the neural bases, of these two theorized factors, and they specify that the functions of the intraparietal sulcus may relate more strongly to the control of representations than to the general allocation of attention.
When a test of working memory (WM) requires the retention of multiple items, a subset of them can be prioritized. Recent studies have shown that, although prioritized (i.e., attended) items are associated with active neural representations, unprioritized (i.e., unattended) memory items can be retained in WM despite the absence of such active representations, and with no decrement in their recognition if they are cued later in the trial. These findings raise two intriguing questions about the nature of the short-term retention of information outside the focus of attention. First, when the focus of attention shifts from items in WM, is there a loss of fidelity for those unattended memory items? Second, could the retention of unattended memory items be accomplished by long-term memory mechanisms? We addressed the first question by comparing the precision of recall of attended versus unattended memory items, and found a significant decrease in precision for unattended memory items, reflecting a degradation in the quality of those representations. We addressed the second question by asking subjects to perform a WM task, followed by a surprise memory test for the items that they had seen in the WM task. Long-term memory for unattended memory items from the WM task was not better than memory for items that had remained selected by the focus of attention in the WM task. These results show that unattended WM representations are degraded in quality and are not preferentially represented in long-term memory, as compared to attended memory items.
Numerous reports have demonstrated low-frequency oscillations during navigation using invasive recordings in the hippocampus of both rats and human patients. Given evidence, in some cases, of low-frequency synchronization between midline cortex and hippocampus, it is also possible that low-frequency movement-related oscillations manifest in healthy human neocortex. However, this possibility remains largely unexplored, in part due to the difficulties of coupling free ambulation and effective scalp EEG recordings. In the current study, participants freely ambulated on an omnidirectional treadmill and explored an immersive virtual reality city rendered on a head-mounted display while undergoing simultaneous wireless scalp EEG recordings. We found that frontal-midline (FM) delta-theta (2-7.21 Hz) oscillations increased during movement compared to periods of standing still, consistent with a role in navigation. In contrast, posterior alpha (8.32-12.76 Hz) oscillations were suppressed in the presence of visual input, independent of movement. Our findings suggest that FM delta-theta and posterior alpha oscillations arise at independent frequencies, under complementary behavioral conditions, and, at least for FM delta-theta oscillations, at independent recording sites. Together, our findings support a double dissociation between movement-related FM delta-theta and resting-related posterior alpha oscillations. Our study thus provides novel evidence that FM delta-theta oscillations arise, in part, from real-world ambulation, and are functionally independent from posterior alpha oscillations.
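The band-power contrast described above can be sketched as follows, using synthetic signals in place of real scalp EEG; the sampling rate and signal parameters are assumptions, and only the band edges follow the abstract:

```python
# Hedged, illustrative sketch: frontal-midline delta-theta (2-7.21 Hz) band
# power during simulated "movement" versus "standing" epochs.
import numpy as np
from scipy.signal import welch

fs = 500                      # assumed sampling rate, Hz
t = np.arange(0, 30, 1 / fs)  # one 30 s epoch per condition
rng = np.random.default_rng(1)

def band_power(x, lo, hi):
    # Welch PSD estimate, then total power within the band of interest.
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].sum()

standing = rng.normal(size=t.size)                                  # noise only
moving = rng.normal(size=t.size) + 0.8 * np.sin(2 * np.pi * 5 * t)  # added 5 Hz theta

ratio = band_power(moving, 2, 7.21) / band_power(standing, 2, 7.21)
print(f"movement/standing delta-theta power ratio: {ratio:.1f}")
```

A ratio well above 1 corresponds to the movement-related delta-theta increase; the same comparison at 8.32-12.76 Hz over posterior channels would address the alpha effect.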
An important question concerns how we use environmental boundaries to anchor spatial representations during navigation. Behavioral and neurophysiological models appear to provide conflicting predictions, and this question has been difficult to answer because of technical challenges with testing navigation in novel, large-scale, realistic spatial environments. We conducted an experiment in which participants freely ambulated on an omnidirectional treadmill while viewing novel, town-sized environments in virtual reality on a head-mounted display. Participants performed interspersed judgments of relative direction (JRD) to assay their spatial knowledge and to determine when during learning they employed environmental boundaries to anchor their spatial representations. We designed JRD questions that assayed directions aligned and misaligned with the axes of the surrounding rectangular boundaries and employed structural equation modeling to better understand the learning-dependent dynamics for aligned versus misaligned pointing. Pointing accuracy showed no initial directional bias to boundaries, although such "alignment effects" did emerge after the fourth block of learning. Preexposure to a map in Experiment 2 led to similar overall findings. A control experiment in which participants studied a map but did not navigate the environment, however, demonstrated alignment effects after a brief, initial learning experience. Our results help to bridge the gap between neurophysiological models of location-specific firing in rodents and human behavioral models of spatial navigation by emphasizing the experience-dependent accumulation of route-specific knowledge. In particular, our results suggest that the use of spatial boundaries as an organizing schema during navigation of large-scale space occurs in an experience-dependent fashion.
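Scoring a JRD trial, and splitting trials by boundary alignment, can be sketched as below; the trial data and the 22.5° alignment tolerance are hypothetical choices for illustration, not the study's parameters:

```python
# Illustrative sketch: angular error on judgments of relative direction (JRD),
# split by whether the imagined facing direction is aligned with the
# rectangular boundary axes.
import numpy as np

def angular_error(response_deg, correct_deg):
    """Smallest absolute angular difference, in degrees (0-180)."""
    d = np.abs(response_deg - correct_deg) % 360
    return np.minimum(d, 360 - d)

def is_aligned(facing_deg, tol=22.5):
    """Aligned if facing is within tol of a boundary axis (0/90/180/270 deg)."""
    m = facing_deg % 90
    return min(m, 90 - m) <= tol

# Hypothetical trials: (imagined facing, correct bearing, pointed response)
trials = [(0, 45, 50), (90, 120, 110), (37, 200, 230), (130, 10, 55)]
aligned = [angular_error(r, c) for f, c, r in trials if is_aligned(f)]
misaligned = [angular_error(r, c) for f, c, r in trials if not is_aligned(f)]
print("aligned mean error:", np.mean(aligned))
print("misaligned mean error:", np.mean(misaligned))
```

An alignment effect corresponds to a reliably smaller mean error on aligned than on misaligned trials; tracking this difference block by block is what reveals its experience-dependent emergence.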
Research into the behavioral and neural correlates of spatial cognition and navigation has benefited greatly from recent advances in virtual reality (VR) technology. Devices such as head-mounted displays (HMDs) and omnidirectional treadmills provide research participants with access to a more complete range of body-based cues, which facilitate the naturalistic study of learning and memory in three-dimensional (3D) spaces. One limitation to using these technologies for research applications is that they almost ubiquitously require integration with video game development platforms, also known as game engines. While powerful, game engines do not provide an intrinsic framework for experimental design and require at least a working proficiency with the software and any associated programming languages or integrated development environments (IDEs). Here, we present a new asset package, called Landmarks, for designing and building 3D navigation experiments in the Unity game engine. Landmarks combines the ease of building drag-and-drop experiments without writing code with the flexibility to modify existing aspects, create new content, and even contribute that work to the open-source repository via GitHub. Landmarks is actively maintained and is supplemented by a wiki with resources for users, including links, tutorials, videos, and more. We compare several alternatives to Landmarks for building navigation experiments and 3D experiments more generally, provide an overview of the package and its structure in the context of the Unity game engine, and discuss benefits relating to the ongoing and future development of Landmarks.