Place cells of the rodent hippocampus constitute one of the most striking examples of a correlation between neuronal activity and complex behaviour in mammals. These cells increase their firing rates when the animal traverses specific regions of its surroundings, providing a context-dependent map of the environment. Neuroimaging studies implicate the hippocampus and the parahippocampal region in human navigation. However, these regions also respond selectively to visual stimuli. It thus remains unclear whether rodent place coding has a homologue in humans or whether human navigation is driven by a different, visually based neural mechanism. We directly recorded from 317 neurons in the human medial temporal and frontal lobes while subjects explored and navigated a virtual town. Here we present evidence for a neural code of human spatial navigation based on cells that respond at specific spatial locations and cells that respond to views of landmarks. The former are present primarily in the hippocampus, and the latter in the parahippocampal region. Cells throughout the frontal and temporal lobes responded to the subjects' navigational goals and to conjunctions of place, goal and view.
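The notion of a cell that "responds at specific spatial locations" is conventionally quantified with an occupancy-normalized firing-rate map: spikes are binned by the subject's location and divided by the time spent in each bin. The sketch below is a minimal illustration of that standard analysis, not the recording or statistical pipeline used in this study; the array names, arena size, and bin counts are assumptions made for the example.

```python
import numpy as np

def place_rate_map(positions, spike_positions, arena_size=100.0, n_bins=20, dt=0.02):
    """Occupancy-normalized firing-rate map (spikes/s per spatial bin).

    positions       : (T, 2) array of x, y samples taken every dt seconds
    spike_positions : (S, 2) array of x, y locations at which spikes occurred
    """
    edges = np.linspace(0.0, arena_size, n_bins + 1)
    # Time spent in each spatial bin (seconds)
    occupancy, _, _ = np.histogram2d(positions[:, 0], positions[:, 1], bins=[edges, edges])
    occupancy *= dt
    # Spike counts in each spatial bin
    spikes, _, _ = np.histogram2d(spike_positions[:, 0], spike_positions[:, 1], bins=[edges, edges])
    # Rate = spikes / occupancy; leave unvisited bins as NaN
    with np.errstate(invalid="ignore", divide="ignore"):
        rate = np.where(occupancy > 0, spikes / occupancy, np.nan)
    return rate
```

A cell whose map shows a single localized peak, stable across repeated traversals, would be a candidate place-responsive cell in this kind of analysis.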
We studied the responses of single neurons in the human medial temporal lobe while subjects viewed familiar faces, animals, and landmarks. By progressively shortening the duration of stimulus presentation, coupled with backward masking, we show two striking properties of these neurons. (i) Their responses do not differ statistically across the 33-ms, 66-ms, and 132-ms stimulus durations; only the 264-ms presentations elicit significantly higher firing. (ii) These responses follow conscious perception, as indicated by the subjects' recognition reports. Remarkably, when recognized, a single snapshot as brief as 33 ms was sufficient to trigger strong single-unit responses far outlasting the stimulus presentation. These results suggest that neurons in the medial temporal lobe can reflect conscious recognition by "all-or-none" responses.

Keywords: consciousness | memory | visual perception | medial temporal lobe | epilepsy

Our brain has the remarkable ability to create coherent percepts despite constant changes in the visual environment. For example, our perception of a face is similar whether we see it for a fraction of a second or for much longer periods of time. Critically, below a certain temporal threshold, recognition appears to fail in an "all-or-none" fashion. Previous studies have addressed this question using functional magnetic resonance imaging (fMRI) in humans (1). However, how our visual system represents this temporal nonlinearity at the single-neuron level remains an open question, because the fMRI signal gives only an indirect and temporally sluggish measure of the activity of large neural populations.

Visual perception is processed along the ventral visual pathway, from neurons in early visual areas extracting local visual features to neurons in higher areas involved in the encoding and recognition of the actual object being seen (2-6). This processing culminates in the medial temporal lobe (MTL), which receives massive inputs from high-level visual areas. Converging evidence has shown that the MTL is not part of the recognition process per se (but see ref. 7); rather, it mediates the transformation of percepts into memories (8, 9). However, given their function in long-term memory storage, MTL neurons can indirectly "signal" perceptual processes, because percepts should be represented in the MTL if they are to be stored in long-term memory for later recall. In particular, we recently reported the presence of neurons in the human MTL that fired selectively to different views of specific individuals, and in some cases even to their written name (10), showing the existence of an abstract representation that is invariant to basic visual features.

To study the relationship of these MTL neurons to stimulus duration, and how this correlates with conscious recognition, in the current study we used different durations of stimulus presentation immediately followed by a mask. Stimulus durations were chosen to be at the threshold of recognition, so that the same visual stimulus c...
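The duration effect reported above (no difference among 33, 66, and 132 ms; higher firing only at 264 ms) amounts to comparing per-trial response strength across duration conditions. The sketch below shows one conventional way such a comparison could be run (an omnibus Kruskal-Wallis test followed by pairwise checks); it is an assumed illustration, not the paper's actual statistics, and the spike-count values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical per-trial spike counts in a fixed post-stimulus window, one array per duration
counts = {
    33:  np.array([1, 0, 2, 1, 0, 1, 2, 1]),
    66:  np.array([1, 2, 0, 1, 1, 0, 2, 1]),
    132: np.array([2, 1, 1, 0, 2, 1, 1, 2]),
    264: np.array([5, 7, 6, 4, 8, 6, 5, 7]),
}

# Omnibus test: do firing rates differ across the four durations?
h, p = stats.kruskal(*counts.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# Pairwise check: is firing at 264 ms higher than at each shorter duration?
for dur in (33, 66, 132):
    u, p_pair = stats.mannwhitneyu(counts[264], counts[dur], alternative="greater")
    print(f"264 ms > {dur} ms: U = {u:.1f}, p = {p_pair:.4f}")
```

The same per-trial counts could be split by the subjects' recognition reports instead of by duration to test the claim that responses track conscious perception rather than stimulus length.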
A seminal experiment found that the reported time of a decision to perform a simple action was at least 300 ms after the onset of the brain activity that normally preceded the action. In Experiment 1, we presented deceptive feedback (an auditory beep) 5 to 60 ms after the action, signifying a movement time later than the actual movement. The reported time of decision shifted later, linearly with the delay in feedback, and came after the muscular initiation of the response at all but the 5-ms delay. In Experiment 2, participants viewed their hand with and without a 120-ms video delay, and reported a decision time 44 ms later with the delay than without it. We conclude that participants' reports of their decision time are largely inferred from the apparent time of the response. The perception of a hypothetical brain event prior to the response could have, at most, a small influence.
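The claim that the reported decision time shifted "linearly with the delay in feedback" is, in analysis terms, a linear regression of reported time on imposed delay. The sketch below fits such a line to made-up numbers purely to show the shape of the analysis; the values are hypothetical and are not results from the experiment.

```python
import numpy as np
from scipy import stats

# Hypothetical data: imposed feedback delay (ms) and mean reported decision time
# relative to actual movement onset (ms)
delay    = np.array([5, 20, 40, 60])
reported = np.array([-2, 10, 26, 41])

fit = stats.linregress(delay, reported)
print(f"slope = {fit.slope:.2f} ms of report shift per ms of delay, "
      f"intercept = {fit.intercept:.1f} ms, r = {fit.rvalue:.2f}")
# A slope near 1 would indicate that reports track the (delayed) apparent response time
# rather than a fixed pre-response brain event.
```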
Humans, like many other species, employ three fundamental navigational strategies: allocentric, egocentric, and beacon-based. Here, we review each of these forms of navigation, with a particular focus on how our high-resolution visual system contributes to their unique properties. We also consider how we might employ allocentric and egocentric representations, in particular, across different spatial dimensions, such as 1-D vs. 2-D. Our high-acuity visual system also raises important considerations about the scale of the space being navigated (e.g., smaller, room-sized “vista” spaces vs. larger, city-sized “environmental” spaces). We conclude that a hallmark of human spatial navigation is our ability to employ these representational systems in a parallel and flexible manner, in ways that differ as a function of both dimension and spatial scale.
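The allocentric/egocentric distinction has a simple geometric core: an egocentric representation codes a landmark's position relative to the navigator's current location and heading, whereas an allocentric representation codes it in a world-fixed frame. The sketch below shows the standard coordinate transform relating the two; the function name and frame conventions are illustrative assumptions, not something drawn from the review.

```python
import numpy as np

def egocentric_to_allocentric(landmark_ego, navigator_pos, heading_rad):
    """Map a landmark from navigator-centred (egocentric) coordinates to
    world-fixed (allocentric) coordinates.

    landmark_ego  : (x, y) in the navigator's frame (x = ahead, y = left)
    navigator_pos : (x, y) of the navigator in the world frame
    heading_rad   : navigator's heading in radians (0 = world x-axis)
    """
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    rotation = np.array([[c, -s],
                         [s,  c]])
    return np.asarray(navigator_pos) + rotation @ np.asarray(landmark_ego)

# Example: a landmark 3 m straight ahead while facing "north" (heading = pi/2)
print(egocentric_to_allocentric((3.0, 0.0), (10.0, 10.0), np.pi / 2))  # ~[10, 13]
```

Beacon navigation, by contrast, needs neither frame: the navigator simply reduces the visual angle or distance to a perceptible target.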