The medial temporal lobe (MTL) is believed to support episodic memory: the vivid recollection of a specific event situated in a particular place at a particular time. There is ample neurophysiological evidence that the MTL computes location in allocentric space and more recent evidence that the MTL also codes for time. Space and time represent a similar computational challenge; both are variables that cannot be simply calculated from the immediately available sensory information. We introduce a simple mathematical framework that computes functions of both spatial location and time as special cases of a more general computation. In this framework, experience unfolding in time is encoded via a set of leaky integrators. These leaky integrators encode the Laplace transform of their input. The information contained in the transform can be recovered using an approximation to the inverse Laplace transform. In the temporal domain, the resulting representation reconstructs the temporal history. By integrating movements, the equations give rise to a representation of the path taken to arrive at the present location. By modulating the transform with information about allocentric velocity, the equations code for the position of a landmark. Simulated cells show a close correspondence to neurons observed in various regions for all three cases. In the temporal domain, novel secondary analyses of hippocampal time cells verified several qualitative predictions of the model. An integrated representation of spatiotemporal context can be computed by taking conjunctions of these elemental inputs, leading to a correspondence with conjunctive neural representations observed in dorsal CA1.
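The encoding stage described above can be illustrated with a short numerical sketch (illustrative Python, not the authors' code; the decay rates, pulse input, and Euler step size are assumptions for the demonstration). Each leaky integrator with decay rate s obeys dF/dt = -sF + f(t), so at time t it holds the Laplace transform of the input history, F(s, t) = ∫ f(t') e^{-s(t-t')} dt'. Driving a bank of such units with a brief pulse and reading them out after a delay T recovers the expected e^{-sT} profile:

```python
import numpy as np

# A bank of leaky integrators, each with its own decay rate s, driven
# by a shared input f(t). Each unit obeys dF/dt = -s*F + f(t), so it
# holds the Laplace transform of the input history at the current time.

def run_integrators(f, s_values, dt=0.01):
    """Euler-integrate dF/dt = -s F + f(t) for each decay rate in s_values."""
    F = np.zeros(len(s_values))
    for f_t in f:
        F += dt * (-s_values * F + f_t)
    return F

s_values = np.array([0.5, 1.0, 2.0, 4.0])  # assumed decay rates
dt = 0.01
t = np.arange(0.0, 5.0, dt)
f = np.zeros_like(t)
f[0] = 1.0 / dt              # approximate delta pulse at t = 0

F = run_integrators(f, s_values, dt)
# A delay T after the pulse, each unit has decayed to about e^{-s T};
# here T = 5, so F ≈ exp(-s_values * 5).
print(F)
```

Because each unit retains the transform rather than the raw input, no explicit clock or buffer is needed; the temporal information is carried implicitly in the pattern of decay across the bank.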
We propose a principled way to construct an internal representation of the temporal stimulus history leading up to the present moment. A set of leaky integrators performs a Laplace transform on the stimulus function, and a linear operator approximates the inversion of the Laplace transform. The result is a representation of stimulus history that retains information about the temporal sequence of stimuli. This procedure naturally represents more recent stimuli more accurately than less recent stimuli; the decrement in accuracy is precisely scale invariant. This procedure also yields time cells that fire at specific latencies following the stimulus with a scale-invariant temporal spread. Combined with a simple associative memory, this representation gives rise to a moment-to-moment prediction that is also scale invariant in time. We propose that this scale-invariant representation of temporal stimulus history could serve as an underlying representation accessible to higher-level behavioral and cognitive mechanisms. In order to illustrate the potential utility of this scale-invariant representation in a variety of fields, we sketch applications using minimal performance functions to problems in classical conditioning, interval timing, scale-invariant learning in autoshaping, and the persistence of the recency effect in episodic memory across timescales.
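The inversion stage and the resulting scale-invariant time cells can be sketched numerically (illustrative Python; the readout order k = 4 and the probed timestamps are assumptions for the demonstration, not the authors' fitted parameters). For a delta input presented `lag` seconds ago, the transform is F(s) = e^{-s·lag}, whose k-th derivative in s is (-lag)^k e^{-s·lag}; the Post approximation to the inverse transform reads this out at s = k/tau_star for a unit with timestamp tau_star:

```python
import numpy as np
from math import factorial

# Post approximation to the inverse Laplace transform for a delta input:
#   activation = (-1)^k / k! * s^(k+1) * d^k F / ds^k,  with s = k / tau_star
# For F(s) = exp(-s * lag), the k-th derivative is (-lag)^k exp(-s * lag).

def time_cell(tau_star, lag, k=4):
    s = k / tau_star
    return s ** (k + 1) * lag ** k * np.exp(-s * lag) / factorial(k)

lag = np.linspace(0.05, 40.0, 8000)
peaks, widths = [], []
for tau_star in (2.0, 4.0, 8.0):
    r = time_cell(tau_star, lag)
    peaks.append(lag[np.argmax(r)])        # peak firing occurs at lag == tau_star
    above = lag[r > 0.5 * r.max()]
    widths.append(above[-1] - above[0])    # half-height tuning width

# Scale invariance: doubling the timestamp doubles both the peak time and
# the temporal spread, so the width-to-peak ratio is constant across units.
print(peaks, widths)
```

This is the sense in which the decrement in accuracy is precisely scale invariant: units with later timestamps are proportionally broader, so the representation looks statistically the same at every temporal scale.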
This article pursues the hypothesis that a scale-invariant representation of history could support performance in a variety of learning and memory tasks. This representation maintains a conjunctive representation of what happened when that grows continuously less accurate for events further and further in the past. Simple behavioral models using a few operations, including scanning, matching, and a "jump back in time" that recovers previous states of the history, describe a range of behavioral phenomena. These behavioral applications include canonical results from the judgment of recency task over short and long scales, the recency and contiguity effect across scales in episodic recall, and temporal mapping phenomena in conditioning. A growing body of neural data suggests that neural representations in several brain regions have qualitative properties predicted by the representation of temporal history. Taken together, these results suggest that a scale-invariant representation of temporal history may serve as a cornerstone of a physical model of cognition in learning and memory.
Episodic memory, which depends critically on the integrity of the medial temporal lobe (MTL), has been described as “mental time travel” in which the rememberer “jumps back in time.” The neural mechanism underlying this ability remains elusive. Mathematical and computational models of performance in episodic memory tasks provide a specific hypothesis regarding the computation that supports such a jump back in time. The models suggest that a representation of temporal context, a representation that changes gradually over macroscopic periods of time, is the cue for episodic recall. According to these models, a jump back in time corresponds to a stimulus recovering a prior state of temporal context. In vivo single-neuron recordings were taken from the human MTL while epilepsy patients distinguished novel from repeated images in a continuous recognition memory task. The firing pattern of the ensemble of MTL neurons showed robust temporal autocorrelation over macroscopic periods of time during performance of the memory task. The gradually-changing part of the ensemble state was causally affected by the visual stimulus being presented. Critically, repetition of a stimulus caused the ensemble to elicit a pattern of activity that resembled the pattern of activity present before the initial presentation of the stimulus. These findings confirm a direct prediction of this class of temporal context models and may be a signature of the mechanism that underlies the experience of episodic memory as mental time travel.
Abstract. The radiation gauges used by Chrzanowski (his IRG/ORG) for metric reconstruction in the Kerr spacetime seem to be over-specified. Their specification consists of five conditions: four, which we treat here as valid gauge conditions, plus an additional condition on the trace of the metric perturbation. In this work, we utilize a newly developed form of the perturbed Einstein equations to establish a conditionon a particular tetrad component of the stress-energy tensor -under which the full IRG/ORG can be imposed. Using gauge freedom, we are able to impose the full IRG for Petrov type II and type D backgrounds, using a different tetrad for each case. As a specific example, we work through the process of imposing the IRG in a Schwarzschild background, using a more traditional approach. Implications for metric reconstruction using the Teukolsky curvature perturbations in type D spacetimes are briefly discussed.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.
hi@scite.ai
10624 S. Eastern Ave., Ste. A-614
Henderson, NV 89052, USA
Copyright © 2024 scite LLC. All rights reserved.
Made with 💙 for researchers
Part of the Research Solutions Family.