Introduction

In this paper, a new optical system for real-time, three-dimensional position tracking is described. This system adopts an "inside-out" tracking paradigm. The working environment is a room whose ceiling is lined with a regular pattern of infrared LEDs flashing under the system's control. Three cameras are mounted on a helmet worn by the user. Each camera uses a lateral-effect photodiode as the recording surface. The 2D positions of the LED images within each camera's field of view are detected and reported in real time. The measured 2D image positions and the known 3D positions of the LEDs are used to compute the position and orientation of the camera assembly in space.

We have designed an iterative algorithm to estimate the 3D position of the camera assembly in space. The algorithm is a generalized version of Church's method and allows for multiple cameras with nonconvergent nodal points. Several equations are formulated to predict the system's error analytically. The requirements of accuracy, speed, adequate working volume, light weight, and small size of the tracker are also addressed.

A prototype was designed and built to demonstrate the integration and coordination of all essential components of the new tracker. This prototype uses off-the-shelf components and can be easily duplicated. Our results indicate that the new system significantly outperforms other existing systems. The new tracker provides more than 200 updates per second, registers 0.1-degree rotational movements and 2-millimeter translational movements, and covers a working volume of about 1,000 ft³ (10 ft on each side).
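The pose computation described above can be illustrated with a minimal sketch: given the known 3D positions of beacons and their measured 2D image positions, an iterative least-squares (Gauss-Newton) refinement recovers the six-DOF pose. This is a hypothetical single-camera pinhole simplification with a numerical Jacobian, not the paper's multi-camera generalization of Church's method; the function names and parameterization are illustrative assumptions.

```python
import numpy as np

def project(points3d, pose):
    """Pinhole projection of world points under pose = (rx, ry, rz, tx, ty, tz).

    Rotation is composed from Euler angles; focal length is taken as 1.
    """
    rx, ry, rz, tx, ty, tz = pose
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    cam = points3d @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])
    return cam[:, :2] / cam[:, 2:3]  # perspective divide

def estimate_pose(points3d, observed2d, pose0, iters=20):
    """Gauss-Newton refinement of the 6-DOF pose from 2D image measurements."""
    pose = np.asarray(pose0, dtype=float)
    eps = 1e-6
    for _ in range(iters):
        r = (project(points3d, pose) - observed2d).ravel()  # reprojection residuals
        J = np.empty((r.size, 6))
        for k in range(6):  # numerical Jacobian, one column per pose parameter
            dp = pose.copy()
            dp[k] += eps
            J[:, k] = ((project(points3d, dp) - observed2d).ravel() - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        pose = pose + step
    return pose
```

With at least three non-degenerate beacons (six unknowns, two equations per beacon), the refinement converges rapidly from a rough initial pose, which matches the real-time requirement the abstract emphasizes.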
We propose a unification framework for three-dimensional shape reconstruction using physically based models. A variety of 3D shape reconstruction techniques have been developed in the past two decades, such as shape from stereopsis, from shading, from texture gradient, and from structured lighting. However, the lack of a general theory that unifies these shape reconstruction techniques into one framework hinders the effort of a synergistic image interpretation scheme using multiple sensors/information sources. Most shape-from-X techniques use an "observable" (e.g., the stereo disparity, intensity, or texture gradient) and a model, which is based on specific domain knowledge (e.g., the triangulation principle, reflectance function, or texture distortion equation) to predict the observable, in 3D shape reconstruction. We show that all these "observable-prediction-model" types of techniques can be incorporated into our framework of energy constraint on a flexible, deformable image frame. In our algorithm, if the observable does not conform to the predictions obtained using the corresponding model, a large "error" potential results. The error potential gradient forces the flexible image frame to deform in space. The deformation brings the flexible image frame to "wrap" onto the surface of the imaged 3D object. Surface reconstruction is thus achieved through a "package wrapping" or a "shape deformation" process by minimizing the discrepancy between the observable and the model prediction. The dynamics of such a wrapping process are governed by the least action principle, which is physically correct. A physically based model is essential in this general shape reconstruction framework because of its capability to recover the desired 3D shape, to provide an animation sequence of the reconstruction, and to include the regularization principle in the theory of surface reconstruction.
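The wrapping process described above can be sketched in one dimension: the "image frame" is a height profile that deforms until its predicted observable matches the measured one, under a smoothness regularizer. This is a toy illustration under stated assumptions: plain gradient descent stands in for the paper's least-action dynamics, the observable is taken to be a depth profile directly, and all names and parameters are hypothetical.

```python
import numpy as np

def wrap_surface(observed, alpha=0.2, step=0.3, iters=2000):
    """Deform a flat 1D 'image frame' z(x) to wrap onto an observed depth profile.

    Energy = data term (discrepancy between frame and observable)
           + alpha * smoothness term (penalty on neighbor differences).
    """
    z = np.zeros_like(observed)
    for _ in range(iters):
        data_grad = z - observed                   # gradient of 0.5 * (z - obs)^2
        lap = np.zeros_like(z)
        lap[1:-1] = z[:-2] - 2 * z[1:-1] + z[2:]   # discrete Laplacian (smoother)
        z = z - step * (data_grad - alpha * lap)   # descend the energy gradient
    return z
```

Where the observable disagrees with the model prediction, the data term creates the large "error" potential the abstract mentions, and its gradient pulls the frame onto the surface; the regularizer keeps the deformation smooth.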
We present a simple technique to model the structures and behaviors of flexible, elastic objects. We use an imaginary elastic wire frame, which is made of membranous, thin-plate-type material, to model the surface structures of flexible objects. We demonstrate that in computational vision, such a flexible wire frame can be used for visual surface reconstruction with the structured-light sensing technique. In graphic animation, we allow animation sequences to be generated automatically between prespecified key frames, the surface structures of which are described by our flexible model. Furthermore, we allow collisions of objects' trajectories so that interactions of multiple flexible objects can be simulated. We believe that our technique is widely applicable in many computational vision and graphic animation processes.
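The elastic behavior described above can be sketched with the simplest flexible structure: a damped chain of point masses connected by springs, which relaxes back to rest after a displacement. This is a crude 1D stand-in for the membranous, thin-plate wire frame of the abstract, not its actual formulation; the function name and constants are illustrative assumptions.

```python
import numpy as np

def simulate_wire(n=20, steps=2000, k=50.0, damping=4.0, dt=0.01):
    """Damped mass-spring chain with pinned ends.

    n point masses along a wire; the midpoint is displaced ("plucked") and the
    elastic restoring force plus damping returns the wire to its rest shape.
    """
    y = np.zeros(n)
    y[n // 2] = 2.0                 # initial displacement at the midpoint
    vy = np.zeros(n)
    for _ in range(steps):
        f = np.zeros(n)
        # elastic force from discrete curvature (second difference)
        f[1:-1] = k * (y[:-2] - 2 * y[1:-1] + y[2:])
        f -= damping * vy           # velocity damping
        vy[1:-1] += dt * f[1:-1]    # semi-implicit Euler; endpoints stay fixed
        y[1:-1] += dt * vy[1:-1]
    return y
```

The same integration loop, run between two prespecified shapes instead of toward rest, gives the automatically generated in-between frames the abstract describes for animation.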