The neural mechanisms underlying spatial cognition are modelled, integrating neuronal, systems and behavioural data, and addressing the relationships between long-term memory, short-term memory and imagery, and between egocentric and allocentric and visual and idiothetic representations. Long-term spatial memory is modelled as attractor dynamics within medial temporal allocentric representations, and short-term memory as egocentric parietal representations driven by perception, retrieval and imagery, and modulated by directed attention. Both encoding and retrieval/imagery require translation between egocentric and allocentric representations, mediated by posterior parietal and retrosplenial areas and utilizing head direction representations in Papez's circuit. Thus the hippocampus effectively indexes information by real or imagined location, while Papez's circuit translates to imagery or from perception according to the direction of view. Modulation of this translation by motor efference allows "spatial updating" of representations, while prefrontal simulated motor efference allows mental exploration. The alternating temporoparietal flows of information are organized by the theta rhythm. Simulations demonstrate the retrieval and updating of familiar spatial scenes, hemispatial neglect in memory, and the effects on hippocampal place cell firing of lesioned head direction representations and of conflicting visual and idiothetic inputs.
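The egocentric/allocentric translation described above can be illustrated with a minimal coordinate-transform sketch. This is purely illustrative: the model itself implements the mapping with neural populations gated by head direction cells, not explicit trigonometry, and the function names and two-dimensional setup here are assumptions of the sketch.

```python
import numpy as np

def allocentric_to_egocentric(landmark_xy, agent_xy, heading):
    """Rotate a world-centred (allocentric) landmark position into
    body-centred (egocentric) coordinates, given the agent's heading
    in radians (anticlockwise from the allocentric x-axis)."""
    dx, dy = np.subtract(landmark_xy, agent_xy)
    c, s = np.cos(-heading), np.sin(-heading)
    # Rotate the displacement vector by minus the heading angle.
    return (c * dx - s * dy, s * dx + c * dy)

def egocentric_to_allocentric(ego_xy, agent_xy, heading):
    """Inverse transform, as needed at encoding: a perceived (egocentric)
    location is rotated by the heading and stored in world coordinates."""
    ex, ey = ego_xy
    c, s = np.cos(heading), np.sin(heading)
    return (agent_xy[0] + c * ex - s * ey,
            agent_xy[1] + s * ex + c * ey)
```

Because the two functions are exact inverses for a given heading, corrupting the heading signal (as with lesioned head direction representations) corrupts both encoding and retrieval in a systematic, direction-dependent way.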
The standard form of back-propagation learning is implausible as a model of perceptual learning because it requires an external teacher to specify the desired output of the network. We show how the external teacher can be replaced by internally derived teaching signals. These signals are generated by using the assumption that different parts of the perceptual input have common causes in the external world. Small modules that look at separate but related parts of the perceptual input discover these common causes by striving to produce outputs that agree with each other. The modules may look at different modalities (such as vision and touch), or the same modality at different times (for example, the consecutive two-dimensional views of a rotating three-dimensional object), or even spatially adjacent parts of the same image. Our simulations show that when our learning procedure is applied to adjacent patches of two-dimensional images, it allows a neural network that has no prior knowledge of the third dimension to discover depth in random dot stereograms of curved surfaces.
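The idea of modules that learn by striving to agree can be sketched in a toy simulation: two linear units see different noisy views of a shared latent cause and are trained by gradient ascent on the covariance of their outputs. This is a simplified stand-in for the paper's mutual-information objective; the data-generation setup, learning rate, and iteration count are all assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a shared latent cause z drives two "patches" x and y,
# each corrupted by independent noise.
n, d = 5000, 8
z = rng.standard_normal(n)
u, v = rng.standard_normal(d), rng.standard_normal(d)
x = np.outer(z, u) + 0.5 * rng.standard_normal((n, d))
y = np.outer(z, v) + 0.5 * rng.standard_normal((n, d))

# Two one-unit linear modules; each is updated to covary with the other's
# output, with weights kept at unit norm.
a, b = rng.standard_normal(d), rng.standard_normal(d)
for _ in range(200):
    out_a, out_b = x @ a, y @ b
    # Gradient of cov(out_a, out_b) with respect to each weight vector.
    a += 0.01 * (x.T @ (out_b - out_b.mean())) / n
    b += 0.01 * (y.T @ (out_a - out_a.mean())) / n
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)

# After training, the two modules' outputs track the common cause and
# therefore each other, despite never receiving an external target.
corr = np.corrcoef(x @ a, y @ b)[0, 1]
```

Neither module is given a desired output: the only teaching signal each receives is the other module's output, which is exactly the "internally derived teacher" the abstract describes.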
Semantic priming is traditionally viewed as an effect that rapidly decays. A new view of long-term word priming in attractor neural networks is proposed. The model predicts long-term semantic priming under certain conditions. That is, the task must engage semantic-level processing to a sufficient degree. The predictions were confirmed in computer simulations and in 3 experiments. Experiment 1 showed that when target words are each preceded by multiple semantically related primes, there is long-lag priming on a semantic-decision task but not on a lexical-decision task. Experiment 2 replicated the long-term semantic priming effect for semantic decisions with only one prime per target. Experiment 3 demonstrated semantic priming with much longer word lists at lags of 0, 4, and 8 items. These are the first experiments to demonstrate a semantic priming effect spanning many intervening items and lasting much longer than a few seconds.

Many forms of priming have been studied (for reviews, see Monsell, 1985; Richardson-Klavehn & Bjork, 1988; Schacter, 1987). Whereas in repetition priming the priming stimulus is identical to the target, in similarity-based priming tests (e.g., form priming, morphological priming, and semantic priming), the prime and target are different words sharing some surface features, semantic features, or both. Repetition priming and form priming have been found to produce long-lasting effects ranging from hours to weeks or even months (e.g., Bentin & Feldman, 1990; Bentin & Moscovitch, 1988; Jacoby & Dallas, 1981; Rueckl, 1990; Sloman, Hayman, Ohta, Law, & Tulving, 1988). Semantic priming, however, is traditionally thought to produce only short-term effects that dissipate after several seconds or after more than one item intervenes between prime and target stimuli. Is it possible that completely different priming mechanisms are operating at semantic levels of processing as compared with other levels at which priming could occur?
The most parsimonious account would be that the same mechanisms operate at all levels of the system. In this article, we are concerned particularly with long-term priming and argue in favor of a single mechanism to account for all types of long-term priming. Our view is that short-term semantic priming involves a process completely different from that underlying long-term priming, but either type of process should behave according to the same computational principles at any level of the system, whether it be perceptual or semantic. Although our account of long-term priming is very general, our focus is specifically on semantic priming because our model makes novel predictions in this domain. We first present a theoretical account of long-term priming based on a distributed connectionist model of word recognition, combined with some very general learning-processing assumptions. The theory specifies conditions under which long-term priming should occur and predicts that semantic priming should produce long-term effects under the appropriate conditions (even though it has not been fo...
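One way to see how a single learning mechanism can yield long-term priming in an attractor network: a Hopfield-style net stores binary patterns, and a small Hebbian weight change made after processing a target deepens that target's basin of attraction, so a degraded cue is driven more strongly toward it even after intervening items. This is an illustrative sketch, not the authors' model; the network size, learning rate, and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 3                        # units and stored "word" patterns (assumed sizes)
patterns = rng.choice([-1, 1], size=(P, N))

# Hopfield-style attractor net with Hebbian storage.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def settle(state, W, max_steps=50):
    """Synchronous sign updates until a fixed point (or step limit)."""
    for _ in range(max_steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

target = patterns[0]
cue = target.copy()
flip = rng.choice(N, size=25, replace=False)   # degraded cue: 25% of units flipped
cue[flip] *= -1

# "Priming": a small, persistent Hebbian weight change after processing the
# target deepens its attractor basin (learning rate eta is an assumption).
eta = 0.2
W_primed = W + eta * np.outer(target, target) / N
np.fill_diagonal(W_primed, 0)

# The primed weights drive the degraded cue more strongly toward the target,
# and the cue still settles to the stored pattern.
drive_plain = float(target @ (W @ cue))
drive_primed = float(target @ (W_primed @ cue))
recovered = settle(cue.copy(), W_primed)
```

Because the weight change persists in the connections rather than in any transient activity, the benefit survives arbitrarily many intervening items, which is the signature of long-term priming the experiments test for.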
The computational role of the hippocampus in memory has been characterized as: (i) an index to disparate neocortical storage sites; (ii) a time-limited store supporting neocortical long-term memory; and (iii) a content-addressable associative memory. These ideas are reviewed and related to several general aspects of episodic memory, including the differences between episodic, recognition and semantic memory, and whether hippocampal lesions differentially affect recent or remote memories. Some outstanding questions remain, such as: what characterizes episodic retrieval as opposed to other forms of read-out from memory; what triggers the storage of an event memory; and what are the neural mechanisms involved? To address these questions a neural-level model of the medial temporal and parietal roles in retrieval of the spatial context of an event is presented. This model combines the idea that retrieval of the rich context of real-life events is a central characteristic of episodic memory, and the idea that medial temporal allocentric representations are used in long-term storage while parietal egocentric representations are used to imagine, manipulate and re-experience the products of retrieval. The model is consistent with the known neural representation of spatial information in the brain, and provides an explanation for the involvement of Papez's circuit in both the representation of heading direction and in the recollection of episodic information. Two experiments relating to the model are briefly described. A functional neuroimaging study of memory for the spatial context of life-like events in virtual reality provides support for the model's functional localization. A neuropsychological experiment suggests that the hippocampus does store an allocentric representation of spatial locations.