Keeping oriented in the environment is a multifaceted ability that requires knowledge of at least three pieces of information: one’s own location (“place”) and orientation (“heading”) within the environment, and which location in the environment one is looking at (“view”). We used functional magnetic resonance imaging (fMRI) in humans to examine the neural signatures of these three types of information. Participants were scanned while viewing snapshots that varied in place, view, and heading within a virtual room. We observed adaptation effects, proportional to the physical distances between consecutive places and views, in scene-responsive (retrosplenial complex and parahippocampal gyrus), fronto-parietal, and lateral occipital regions. Multivoxel pattern classification of signals in scene-responsive regions and in the hippocampus allowed above-chance decoding of place, view, and heading, and revealed the existence of map-like representations, in which places and views closer in physical space evoked more similar activity patterns in neural representational space. The pattern of hippocampal activity reflected both view- and place-based distances, the pattern of parahippocampal activity preferentially discriminated between views, and the pattern of retrosplenial activity combined place and view information, while the fronto-parietal cortex showed only transient effects of changes in place, view, and heading. Our findings provide evidence for map-like spatial representations that reflect metric distances in terms of both one’s own location and the locations of landmarks.
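The core representational analysis described above can be sketched in a few lines: if the room is encoded in a map-like fashion, pairwise physical distances between places should correlate with pairwise dissimilarities between the multivoxel patterns those places evoke. The sketch below illustrates this logic on simulated data; the ROI size, place coordinates, and noise model are assumptions made for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of the distance-similarity (representational) analysis:
# places closer in physical space should evoke more similar multivoxel
# patterns. All data here are simulated for illustration.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_places, n_voxels = 6, 200                         # assumed sizes
place_xy = rng.uniform(0, 10, size=(n_places, 2))   # positions in the virtual room

# Simulate ROI patterns whose similarity decays with physical distance.
patterns = place_xy @ rng.normal(size=(2, n_voxels)) \
           + rng.normal(size=(n_places, n_voxels))

physical_dist = pdist(place_xy)                       # pairwise metric distances
neural_dist = pdist(patterns, metric="correlation")   # 1 - pattern correlation

rho, p = spearmanr(physical_dist, neural_dist)
print(f"distance-similarity correlation: rho={rho:.2f}, p={p:.3f}")
```

A positive rank correlation here is the signature of a map-like code: representational distance tracks metric distance in the room.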
Monkey neurophysiology and human neuroimaging studies have demonstrated that passive viewing of optic flow stimuli activates a cortical network of temporal, parietal, insular, and cingulate visual motion regions. Here, we tested whether the human visual motion areas involved in processing optic flow signals simulating self-motion are also activated by active lower limb movements, and hence are likely involved in guiding human locomotion. To this aim, we used a combined approach of task-evoked activity and resting-state functional connectivity by fMRI. We localized a set of six egomotion-responsive visual areas (V6+, V3A, intraparietal motion/ventral intraparietal [IPSmot/VIP], cingulate sulcus visual area [CSv], posterior cingulate sulcus area [pCi], and posterior insular cortex [PIC]) using optic flow, and tested their response to a motor task involving long-range active leg movements. Results revealed that, among these visually defined areas, CSv, pCi, and PIC responded to leg movements (visuomotor areas), while V6+, V3A, and IPSmot/VIP did not (visual areas). Functional connectivity analysis showed that the visuomotor areas are connected to the cingulate motor areas, the supplementary motor area, and, notably, to the medial portion of the somatosensory cortex, which represents the legs and feet. We suggest that CSv, pCi, and PIC perform the visual analysis of egomotion-like signals to provide sensory information to the motor system for the purpose of guiding locomotion.

Keywords: brain mapping, CSv, functional connectivity, locomotion, optic flow, self-motion
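At its core, the resting-state functional connectivity analysis reported here amounts to correlating the mean time series of a seed region with those of candidate target regions. Below is a minimal sketch on simulated time series; the ROI names follow the text, but the signal properties, the averaging step, and the Fisher z-transform are assumptions made for illustration.

```python
# Hedged sketch of seed-based resting-state functional connectivity:
# correlate a seed ROI's mean time series with target ROIs' time series.
# Time series are simulated; a shared fluctuation couples CSv and SMA.
import numpy as np

rng = np.random.default_rng(1)
n_vols = 300                                  # assumed number of resting-state volumes

common = rng.normal(size=n_vols)              # shared slow fluctuation
ts = {
    "CSv": common + 0.8 * rng.normal(size=n_vols),
    "SMA": common + 0.8 * rng.normal(size=n_vols),   # supplementary motor area
    "V6+": rng.normal(size=n_vols),                  # purely visual area, uncoupled
}

seed = ts["CSv"]
for name, target in ts.items():
    if name == "CSv":
        continue
    r = np.corrcoef(seed, target)[0, 1]
    z = np.arctanh(r)                         # Fisher z, for group-level statistics
    print(f"CSv-{name}: r={r:.2f}, z={z:.2f}")
```

In this toy setup the seed correlates with SMA but not with V6+, mirroring the reported dissociation between visuomotor and purely visual areas.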
Neuroimaging studies have revealed two separate classes of category-selective regions specialized in optic flow (egomotion-compatible) processing and in scene/place perception. Despite the importance of both optic flow and scene/place recognition for estimating changes in position and orientation within the environment during self-motion, a possible functional link between egomotion- and scene-selective regions has not yet been established. Here we reanalyzed functional magnetic resonance images from a large sample of participants performing two well-known "localizer" fMRI experiments, consisting of passive viewing of navigationally relevant stimuli such as buildings and places (scene/place stimulus) and of coherently moving fields of dots simulating the visual stimulation experienced during self-motion (flow fields). After interrogating the egomotion-selective areas with respect to the scene/place stimulus and the scene-selective areas with respect to flow fields, we found that the egomotion-selective areas V6+ and pIPS/V3A responded bilaterally more to scenes/places than to faces, and that all the scene-selective areas (the parahippocampal place area or PPA, retrosplenial complex or RSC, and occipital place area or OPA) responded more to egomotion-compatible optic flow than to random motion. A conjunction analysis between the scene/place and flow field stimuli revealed that the main focus of common activation was in the dorsolateral parieto-occipital cortex, spanning the scene-selective OPA and the egomotion-selective pIPS/V3A. Individual inspection of the relative locations of these two regions revealed a partial overlap and a similar response profile to an independent low-level visual motion stimulus, suggesting that OPA and pIPS/V3A may be part of a single motion-selective complex specialized in encoding both egomotion- and scene-relevant information, likely for the control of navigation in a structured environment.
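The conjunction analysis between the two localizer contrasts can be illustrated with the common minimum-statistic approach: a voxel counts as jointly active only if it exceeds threshold in both contrasts. The sketch below uses simulated statistic maps; the threshold value and map shape are assumptions, and the original study may have implemented the conjunction differently.

```python
# Illustrative minimum-statistic conjunction: a voxel survives only if
# it is significant in BOTH contrasts (scenes > faces AND egomotion-
# compatible flow > random motion). Statistic maps are simulated.
import numpy as np

rng = np.random.default_rng(2)
shape = (4, 4, 4)                        # toy volume for the sketch

z_scene = rng.normal(size=shape) + 1.0   # scenes > faces contrast map
z_flow = rng.normal(size=shape) + 1.0    # flow fields > random motion map

z_thresh = 2.3                           # assumed voxelwise threshold
conjunction = np.minimum(z_scene, z_flow) > z_thresh
print(f"{conjunction.sum()} voxels jointly active in both contrasts")
```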
To plan movements toward objects, our brain must recognize whether retinal displacement is due to self-motion and/or to object-motion. Here, we aimed to test whether motion areas are able to segregate these types of motion. We combined an event-related functional magnetic resonance imaging experiment, brain mapping techniques, and wide-field stimulation to study the responsivity of motion-sensitive areas to pure and combined self- and object-motion conditions during virtual movies of a train running within a realistic landscape. We observed a selective response in MT to the pure object-motion condition, and in medial (PEc, pCi, CSv, and CMA) and lateral (PIC and LOR) areas to the pure self-motion condition. Some other regions (like V6) responded more to complex visual stimulation in which both object- and self-motion were present. Among all, we found that some motion regions (V3A, LOR, MT, V6, and IPSmot) could extract object-motion information from the overall motion, recognizing the real movement of the train even when its image remained still on the screen, or moved, because of self-movements. We propose that these motion areas are good candidates for the "flow parsing mechanism," that is, the capability to extract object-motion information from retinal motion signals by subtracting out the optic flow components.

Keywords: area V6, brain mapping, flow parsing, fMRI, optic flow, wide-field
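The "flow parsing mechanism" named above has a simple computational core: retinal motion is modeled as the sum of a global optic-flow field due to self-motion and a local object-motion component, so subtracting an estimate of the flow field recovers the object's real movement. A toy sketch follows, in which the radial-flow model, field size, and object velocity are all illustrative assumptions.

```python
# Toy illustration of flow parsing: retinal motion = self-motion flow
# + local object motion, so subtracting the (here, known) flow component
# recovers the object's real movement in the scene.
import numpy as np

h, w = 32, 32
ys, xs = np.mgrid[0:h, 0:w]
cx, cy = w / 2, h / 2

# Radial expansion field, as produced by forward self-motion.
self_flow = np.stack([(xs - cx), (ys - cy)], axis=-1) * 0.05

# A small object (the "train") translating rightward on the retina.
object_motion = np.zeros((h, w, 2))
object_motion[10:14, 5:15, 0] = 1.0

retinal_motion = self_flow + object_motion

# Flow parsing: subtract out the optic flow component.
parsed = retinal_motion - self_flow
print("recovered object velocity at a train pixel:", parsed[12, 10])  # -> [1. 0.]
```

In practice the flow component must be estimated from the global motion pattern rather than known in advance, but the subtraction logic is the same.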
Remembering object positions across different views is a fundamental competence for acting and moving appropriately in large-scale space. Behavioural and neurological changes in elderly subjects suggest that spatial representations of the environment may decline relative to those of young adults. However, no data are available on the use of different reference frames within topographical space in aging. Here we investigated the use of allocentric and egocentric frames in aging by asking young and older participants to encode the location of a target in a virtual room relative either to stable features of the room (allocentric environment-based frame), to an unstable set of objects (allocentric object-based frame), or to the viewer's viewpoint (egocentric frame). After a viewpoint change of 0° (absent), 45° (small), or 135° (large), participants judged whether the target occupied the same spatial position as before relative to one of the three frames. Results revealed a different susceptibility to viewpoint changes in older compared with young participants. Importantly, older participants performed worse than young participants, in terms of reaction times, in the allocentric frames. The deficit was more marked for the environment-based frame, which showed lower sensitivity and worse performance even when no viewpoint change occurred. Our data provide new evidence of a greater vulnerability of allocentric, in particular environment-based, spatial coding in aging, in line with the retrogenesis theory, according to which cognitive changes in aging reverse the sequence of acquisition in mental development.
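The dissociation between reference frames tested here can be made concrete with a little geometry: a target's allocentric (room-based) coordinates are unchanged by a viewpoint rotation, while its egocentric coordinates rotate with the viewer. The sketch below uses illustrative positions and the three viewpoint changes from the task; the coordinate conventions are assumptions made for the example.

```python
# Why the frames dissociate under a viewpoint change: the target's
# room (allocentric) coordinates stay fixed, but its egocentric
# coordinates depend on the viewer's heading.
import numpy as np

def egocentric(target_xy, viewer_xy, heading_deg):
    """Target position expressed in the viewer's reference frame."""
    th = np.radians(heading_deg)
    rot = np.array([[np.cos(th), np.sin(th)],
                    [-np.sin(th), np.cos(th)]])
    return rot @ (np.asarray(target_xy) - np.asarray(viewer_xy))

target = (2.0, 3.0)              # allocentric (room) coordinates: invariant
viewer = (0.0, 0.0)

for change in (0, 45, 135):      # viewpoint changes used in the task
    ego = egocentric(target, viewer, heading_deg=change)
    print(f"viewpoint change {change:>3} deg: egocentric target = {np.round(ego, 2)}")
```

Judging "same position" in the egocentric frame therefore requires updating across the rotation, whereas the environment-based judgment can rely on the unchanged room coordinates.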