Pointing to a visual target that disappears prior to movement requires the maintenance of a memory representation of the target's location. It has been shown that a target can be stored egocentrically, allocentrically, or in both frames of reference simultaneously. The main goal of the present study was to compare the accuracy and kinematics of pointing movements to a remembered target when egocentric, allocentric, or combined egocentric and allocentric coding was possible. The task was to localize, memorize, and reach to a remembered target. Condition 1 was the “no-context” condition and involved presenting the target in a completely dark environment (egocentric condition). In the two other conditions, the target was presented within a visual context provided by an illuminated square. Condition 2 was the “stationary-context” condition and involved keeping the context at the same position during the whole trial (egocentric and/or allocentric coding). Condition 3 was a “moved-context” condition that involved shifting the context to a different location during the recall delay (allocentric coding). Movement accuracy and kinematics were strikingly similar in the moved-context and stationary-context conditions. These results suggest that when both allocentric and egocentric coding are possible, an allocentric strategy is used.
Previous reports have shown that older adults have difficulties maintaining allocentric, but not egocentric, information in memory. The present study evaluated pointing accuracy in younger and older adults for egocentric and allocentric tasks. The task was to localize and maintain one, two, or four target locations. Target(s) were presented with or without a surrounding white square in a dimly lit environment. Contrary to previous postulations, the results of the present study revealed that older adults were able to point to remembered egocentric and allocentric targets as accurately as younger adults, regardless of task difficulty. However, older adults took more time than younger adults to point to allocentric targets. The longer movement time was caused by a lengthening of the deceleration phase, suggesting that during pointing, older adults rely more than younger adults on the visual information surrounding the target.
The present paper reviews a series of prehension experiments recently conducted at Simon Fraser University's Human Motor Systems Laboratory, and attempts to place them into the larger context of multi-segmental control theory. Two related lines of experiments are reported: (a) experiments involving prehension during walking, and (b) experiments involving trunk-assisted reaching. Three-dimensional analyses of movements were performed in both world- and body-centered coordinates. Our results support the idea that both types of tasks are carried out using task-specific synergies. Furthermore, we assert that the actions of these synergies comprise variable contributions from different movement systems and result in smooth, world-centered end-point trajectories. We show evidence that this “motor equivalence” is the result of increasing the complexity of a given task. Finally, the implications of the present findings for prevailing motor control theory are discussed in terms of the theoretical mechanisms underlying the coordination of the transport and grasp components of prehension.
A pointing task was performed both while subjects stood beside and while they walked past targets of differing sizes and movement amplitudes. The hand kinematics were considered relative both to a fixed frame of reference in the movement environment (end-effector kinematics) and to the subject's body (kinematics of the hand alone). From the former view, there were few differences between the standing and walking versions of the task, indicating similarity in the kinematics of the hand. However, when the hand was considered alone, marked differences in kinematics and spatial trajectories between standing and walking emerged. Furthermore, kinematic analyses of the trunk showed that subjects used differing amounts of both flexion-extension and rotation movements at the waist depending on whether they were standing or walking, as well as on the constraints imposed by target width and movement amplitude. The present results demonstrate the existence of motor equivalence in a combined upper- and lower-extremity task and show that this motor equivalence is a control strategy for coping with increasing task demands. Given the complexity involved in controlling the arm, the torso, and the legs (during locomotion), the movements involved in the present tasks appear to be planned and controlled by treating the whole body as a single unit.
Figure 1. Columns show low, medium, and high levels of texture-based biofeedback. Rows show customizations of the same effect for two different games: (top) Static Sprite (cracks) over Portal 2; (bottom) Static Sprite (mud) over Nail'd.

ABSTRACT
Biofeedback games help people maintain specific mental or physical states and are useful for helping children with cognitive impairments learn to self-regulate their brain function. However, biofeedback games are expensive and difficult to create, and they are not sufficiently appealing to hold a child's interest over the long term needed for effective biofeedback training. We present a system that turns off-the-shelf computer games into biofeedback games. Our approach uses texture-based graphical overlays that vary in their obfuscation of underlying screen elements based on the sensed physiological state of the child. The textures can be visually customized so that they appear integrated with the underlying game. Through a 12-week deployment with 16 children with Fetal Alcohol Spectrum Disorder, we show that our solution can hold a child's interest over the long term and balances the competing needs of maintaining the fun of playing while providing effective biofeedback training.