This work explored how the presence of graphical information about self-movement affected reach-to-grasp movements in an augmented environment. Twelve subjects reached to grasp objects that were passed by a partner or rested on a table surface. Graphical feedback about self-movement was available for half the trials and was removed for the other half. Results indicated that removing visual feedback about self-movement in an object-passing task dramatically affected both the receiver's movement to grasp the object and the time to transfer the object between partners. Specifically, significant effects were found for the receiver's deceleration time and for the temporal and spatial aspects of grasp formation. Results also indicated that the presence of a graphic representation of self-movement had similar effects on the kinematics of reaching to grasp a stationary object on a table as on the kinematics of reaching to grasp an object held by a stationary or moving partner. These results suggest that performance of goal-directed movements, whether to a stationary object on a table surface or to objects being passed by a stationary or moving partner, benefits from even a crude graphical representation of the finger pads. The role of graphic feedback about self-movement is discussed for tasks requiring precision. Implications for the use of kinematic measures in the field of Human-Computer Interaction (HCI) are also discussed.
Introduction

As humans, we have the exquisite ability to successfully perform a tremendous number of complex actions with seeming ease. One of the ways in which these complex movements are achieved is through the use of sensory information, gathered from the environment by exteroceptors (e.g., eyes, ears). This sensory information is gathered before the initiation of the movement, for use in generating a motor plan that will direct the performance of the complex action. In addition, sensory feedback is used on-line during the production of movements to fine-tune our actions such that success can be achieved. One of the most critical organs for providing information about objects and their movement in the environment is the eye. Visual feedback is crucial for the accurate and successful performance of many motor activities in both natural and computer-generated environments.

In natural environments, sources of visual information are rich, spatially and temporally accurate, and readily available. However, due to limitations in cur

Mason and MacKenzie 507