Episodic memory was assessed using virtual reality (VR). Forty-four subjects explored a target virtual apartment containing specific objects in each room. They then explored a second virtual apartment comprising both its own specific objects and objects shared with the first apartment. Subjects navigated the virtual apartments in one of two conditions: active or passive. Four main episodic memory components were scored from the VR exposures: (1) learning effect; (2) active forgetting effect; (3) strategies at encoding and at retrieval; and (4) false recognitions (FRs). The effect of navigation mode (active vs. passive) on each memory component was examined. Active subjects showed better learning and retrieval (recognition hits) performance than passive subjects. Active navigation also had a beneficial effect on source-based FR rates: active subjects made fewer source-based FRs than passive subjects. These overall results for the effect of active navigation are discussed in terms of the distinction between item-specific and relational processing.
The purpose of this study was to evaluate the effect of the visual fidelity of a virtual environment (VE) (undetailed vs. detailed) on the transfer of spatial knowledge, as a function of navigation mode (passive vs. active), across three spatial recall tasks (wayfinding, sketch mapping, and picture sorting). Sixty-four subjects (32 men and 32 women) participated in the experiment. Spatial learning was evaluated by these three tasks in the context of the Bordeaux district. In the wayfinding task, the results indicated that the detailed VE helped subjects transfer their spatial knowledge from the VE to the real world, irrespective of navigation mode. In the sketch-mapping task, the detailed VE improved performance compared to the undetailed VE and allowed subjects to benefit from active navigation. In the sorting task, performance was better in the detailed VE; in the undetailed VE, however, active learning either did not help subjects or even deteriorated their performance. These results are discussed in terms of the perceptive-motor and/or spatial representations appropriate to each spatial recall task.
The aim of this study was to evaluate wayfinding and spatial learning difficulties in large-scale spaces for older adults, in relation to the executive and memory decline associated with aging. We compared virtual reality (VR)-based wayfinding and spatial memory performances between young and older adults. Wayfinding and spatial memory performances were correlated with classical measures of executive and visuo-spatial memory functions, and also with self-reported estimates of wayfinding difficulties. We obtained a significant effect of age on wayfinding performances but not on spatial memory performances. The overall analysis showed significant correlations between wayfinding performances and the classical measures of both executive and visuo-spatial memory, but only when the age factor was not partialled out. Older adults also underestimated their wayfinding difficulties: a significant relationship between wayfinding performances and self-reported wayfinding difficulty estimates was found, but only when the age effect was partialled out. These results show that, even when older adults had spatial knowledge equivalent to that of young adults, they had greater difficulty with the wayfinding task, supporting an executive-decline view of age-related wayfinding difficulties. However, the correlation results favor both the memory- and executive-decline views as mediators of age-related differences in wayfinding performances. This is discussed in terms of the relationships between memory and executive functioning in wayfinding task orchestration. Our results also favor the use of objective assessments of everyday navigation difficulties in virtual applications, rather than self-reported questionnaires, since older adults showed difficulties in estimating their everyday wayfinding problems.
The purpose of this study was to examine the effect of navigation mode (passive vs. active) on the virtual-to-real transfer of spatial learning, according to viewpoint displacement (ground: 1.75 m vs. aerial: 4 m) and as a function of the recall tasks used. We hypothesized that active navigation during learning would enhance performance when a route strategy was favored by an egocentric match between learning (ground-level viewpoint) and recall (egocentric frame-based tasks). Sixty-four subjects (32 men and 32 women) participated in the experiment. Spatial learning consisted of route learning in a virtual district (four conditions: passive/ground, passive/aerial, active/ground, or active/aerial), evaluated by three tasks: wayfinding, sketch-mapping, and picture-sorting. In the wayfinding task, subjects assigned the ground-level viewpoint in the virtual environment (VE) performed better than those with the aerial-level viewpoint, especially in combination with active navigation. In the sketch-mapping task, aerial-level learning in the VE resulted in better performance than the ground-level condition, while active navigation was beneficial only in the ground-level condition. The best performance in the picture-sorting task was obtained with the ground-level viewpoint, especially with active navigation. This study confirmed the expected result that the benefit of active navigation was linked to egocentric frame-based situations.