Falls by older adults often result in reduced quality of life and a debilitating fear of further falls. Stopping walking when talking (SWWT) is a significant predictor of future falls by older adults and is thought to reflect age-related increases in the attentional demands of walking. We examined whether SWWT is associated with the use of explicit movement cues during locomotion, and evaluated whether conscious control (i.e., movement-specific reinvestment) is causally linked to fall-related anxiety during a complex walking task. We observed whether twenty-four older adults stopped walking when asked a question during an adaptive gait task. After certain trials, participants completed a visuospatial recall task regarding walkway features, or answered questions about their movements during the walk. In a subsequent experimental condition, participants completed the walking task under conditions of raised postural threat. Compared to a control group, participants who stopped walking when talking reported higher scores on aspects of reinvestment relating to conscious motor processing, but not movement self-consciousness. The higher scores for conscious motor processing were preserved when scores representing cognitive function were included as a covariate. There were no group differences in measures of general cognitive function, visuospatial working memory, or balance confidence. However, the SWWT group reported higher scores on a test of external awareness when walking, indicating allocation of attention away from task-relevant environmental features. Under conditions of increased threat, participants self-reported significantly greater state anxiety and reinvestment, and gave more accurate responses about their movements during the task. SWWT is therefore not associated solely with age-related cognitive decline or with generic age-related increases in the attentional demands of walking. SWWT may instead be caused by competition for the phonological resources of working memory associated with consciously processing motor actions, and appears to be causally linked with fall-related anxiety and increased vigilance.
The control of eye gaze is critical to the execution of many skills. The observation that task experts in many domains exhibit more efficient control of eye gaze than novices has led to the development of gaze training interventions that teach these behaviours. We aimed to extend this literature by i) examining the relative benefits of feed-forward (observing an expert's eye movements) versus feed-back (observing one's own eye movements) training, and ii) automating this training within virtual reality. Serving personnel from the British Army and Royal Navy were randomised to either feed-forward or feed-back training within a virtual reality simulation of a room search and clearance task. Eye movement metrics – including visual search, saccade direction, and entropy – were recorded to quantify the efficiency of visual search behaviours. Feed-forward and feed-back eye movement training produced distinct learning benefits, but both accelerated the development of efficient gaze behaviours. However, we found no evidence that these more efficient search behaviours transferred to better decision making in the room clearance task. Our results suggest that integrating eye movement training principles within virtual reality training simulations may be effective, but further work is needed to understand the underlying learning mechanisms.
Simulation methods, including physical synthetic environments, already play a substantial role in human skills training in the military and are commonly used for developing situational awareness and judgemental skills. The rapid development of virtual reality technologies has provided a new opportunity for delivering this type of training, but before VR can be adopted as part of mandatory training it should be subjected to rigorous tests of its suitability and effectiveness. In this work, we adopted established methods for testing the fidelity and validity of simulated environments to compare three different methods of judgemental training. Thirty-nine dismounted close combat troops from the UK's Royal Air Force completed shoot/don't-shoot judgemental tasks in: i) live-fire; ii) virtual reality; and iii) 2D video simulation conditions. A range of shooting accuracy and decision-making metrics were recorded from all three environments. The results showed that the 2D video simulation posed little decision-making challenge during training. Decision-making performance across the live-fire and virtual reality simulations was comparable, but the two may offer slightly different, and perhaps complementary, methods of training judgemental skills. Different types of simulation should, therefore, be selected carefully to address the exact training need.