Figure 1: Catching a thrown ball. The movement depends on visual estimates of the ball's motion, which trigger shared motor programs for eye, head, arm, and torso movement. The gaze sets the goal for the hand. Initially the movements are reactive, but as visual estimates improve, predictive movements are generated toward the final catching position.

Abstract: We present a novel framework for animating human characters performing fast, visually guided tasks such as catching a ball. The main idea is to consider the coordinated dynamics of sensing and movement. Based on experimental evidence about such behaviors, we propose a generative model that constructs interception behavior online, using discrete submovements directed by uncertain visual estimates of target movement. An important aspect of this framework is that eye movements are included as well and play a central role in coordinating movements of the head, hand, and body. We show that this framework efficiently generates plausible movements and generalizes well to novel scenarios.
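The abstract gives no equations or code, but the online loop it describes can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the estimator, the fixed catch time, the noise model, and the minimum-jerk submovement profile are all assumptions chosen for clarity. At each tick, an uncertain visual estimate of the interception point is refreshed, and a new discrete submovement is triggered only when the estimate has shifted enough to warrant replanning.

```python
import numpy as np

DT = 0.01             # control timestep (s)
T_CATCH = 0.8         # assumed interception time (s); fixed here for simplicity
REPLAN = 0.05         # launch a new submovement if the prediction moves this far (m)
G = np.array([0.0, 0.0, -9.81])

class TargetEstimator:
    """Noisy ballistic estimate of the ball; visual uncertainty shrinks with tracking time."""
    def __init__(self, pos, vel):
        self.pos, self.vel = np.array(pos), np.array(vel)
        self.sigma = 0.5                      # initial uncertainty of the estimate (m)

    def predict_catch_point(self):
        self.sigma *= 0.97                    # estimates improve as the ball is tracked
        true_point = self.pos + self.vel * T_CATCH + 0.5 * G * T_CATCH**2
        return true_point + np.random.randn(3) * self.sigma

def minimum_jerk(x0, x1, duration, t):
    """Minimum-jerk profile, a standard model for discrete reaching submovements."""
    s = np.clip(t / duration, 0.0, 1.0)
    return x0 + (x1 - x0) * (10*s**3 - 15*s**4 + 6*s**5)

hand = np.zeros(3)
est = TargetEstimator(pos=[5.0, 0.0, 1.5], vel=[-6.0, 0.0, 4.0])
goal, origin, t_start = hand.copy(), hand.copy(), 0.0

for step in range(int(T_CATCH / DT)):
    t = step * DT
    catch_point = est.predict_catch_point()   # gaze-driven, uncertain interception estimate
    if np.linalg.norm(catch_point - goal) > REPLAN:
        goal, origin, t_start = catch_point, hand.copy(), t   # trigger a new submovement
    hand = minimum_jerk(origin, goal, 0.3, t - t_start)       # hand follows current submovement
```

Early in the flight the noisy estimate changes often, producing frequent corrective (reactive) submovements; as the uncertainty shrinks, replanning stops and the final submovement runs predictively to the catching position, matching the behavior described in the figure caption.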
We describe and test a non-linear control algorithm inspired by the behavior of motor neurons in humans and other animals during extremely fast saccadic eye movements. The algorithm is implemented on a robotic eye that includes a stiff camera cable, analogous to the optic nerve, which adds a complicated non-linear stiffness to the plant. For high-speed movements, our "pulse-step" controller operates open-loop, using an internal model of the eye plant learned from past measurements. We show that the controller approaches the performance seen in the human eye, producing fast movements with little overshoot. Interestingly, the controller also reproduces the main-sequence relationship observed in animal eye movements.
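As a rough illustration of the pulse-step idea (the abstract gives no equations, so the plant model, gains, and function names below are assumptions, not the paper's implementation): the controller issues a brief, high-amplitude "pulse" to overcome the plant's dynamics, then drops to a sustained "step" command that holds the new position against the elastic restoring forces. Crucially, the pulse parameters are read from an internal model fitted to past measurements rather than computed from online feedback.

```python
import numpy as np

def pulse_step_command(target_angle, current_angle, plant_model, dt=0.001):
    """Generate an open-loop pulse-step command sequence for one saccade.

    plant_model maps saccade amplitude -> (pulse_height, pulse_duration, step_level);
    it stands in for the internal model learned from past measurements.
    """
    amplitude = target_angle - current_angle
    pulse_height, pulse_duration, step_level = plant_model(amplitude)
    n_pulse = int(pulse_duration / dt)
    n_hold = int(0.05 / dt)                        # 50 ms of holding command after the pulse
    return np.concatenate([np.full(n_pulse, pulse_height),
                           np.full(n_hold, step_level)])

def toy_plant_model(amplitude):
    """Toy internal model with illustrative numbers only."""
    pulse_height = 8.0 * amplitude                  # large transient drive
    pulse_duration = 0.002 + 0.001 * abs(amplitude) # longer pulses for larger saccades
    step_level = 1.0 * amplitude                    # holds the eye against elastic stiffness
    return pulse_height, pulse_duration, step_level

cmd = pulse_step_command(target_angle=10.0, current_angle=0.0,
                         plant_model=toy_plant_model)
```

Because larger amplitudes get longer (not just taller) pulses in this toy model, peak velocity grows sublinearly with amplitude, which loosely echoes the main-sequence relationship the abstract mentions.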
We describe a system for active stabilization of cameras mounted on highly dynamic robots. To focus on careful performance evaluation of the stabilization algorithm, we use a camera mounted on a robotic test platform that undergoes unknown perturbations in the horizontal plane, a commonly occurring scenario in mobile robotics. We show that the camera can be effectively stabilized using an inertial sensor and a single additional motor, without a joint position sensor. The algorithm uses an adaptive controller based on a model of the vertebrate cerebellum for velocity stabilization, with additional drift correction, and we have also developed a resolution-adaptive retinal-slip algorithm that is robust to motion blur. We evaluated the performance quantitatively using another high-speed robot to generate repeatable sequences of large, fast movements for the gaze stabilization system to counteract; this high-accuracy repeatability allows a fair comparison between stabilization algorithms. We show that the resulting system can reduce camera image motion to about one pixel per frame on average, even when the platform is rotated at 200 degrees per second. As a practical application, we also demonstrate how the common task of face detection benefits from active gaze stabilization.
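The abstract names the components (inertial sensor, adaptive cerebellar controller, retinal slip, drift correction) but not the update rule, so the following is only a plausible sketch in the spirit of cerebellar learning models: a feedforward command derived from the gyro is scaled by an adaptive gain, and the residual retinal slip measured by vision serves as the error signal that trains that gain. The variable names and the LMS-style update are assumptions, not the paper's algorithm.

```python
class CerebellarStabilizer:
    """Adaptive velocity stabilization: gyro feedforward with slip-driven learning."""

    def __init__(self, learning_rate=0.01, drift_gain=0.1):
        self.gain = 1.0            # adaptive gyro-to-motor gain (ideal value unknown a priori)
        self.lr = learning_rate    # step size for the LMS-style gain update
        self.drift_gain = drift_gain

    def command(self, gyro_velocity, retinal_slip, gaze_offset):
        """Compute the camera motor velocity command for one control cycle.

        gyro_velocity: platform angular velocity from the inertial sensor (deg/s)
        retinal_slip:  residual image motion measured by vision (deg/s), the error signal
        gaze_offset:   accumulated deviation from the desired gaze direction (deg)
        """
        # Counter-rotate the camera against the measured platform motion.
        feedforward = -self.gain * gyro_velocity
        # Cerebellum-like adaptation: slip correlated with platform velocity adjusts the gain,
        # so under-compensation (slip in the direction of rotation) increases the gain.
        self.gain += self.lr * retinal_slip * gyro_velocity
        # Slow drift correction recenters the camera despite gyro bias.
        return feedforward - self.drift_gain * gaze_offset

stab = CerebellarStabilizer()
motor_cmd = stab.command(gyro_velocity=150.0, retinal_slip=5.0, gaze_offset=0.2)
```

Note that no joint position sensor appears anywhere in the loop, consistent with the abstract: the controller relies only on the inertial measurement and the vision-derived slip signal.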