The ability to automatically locate objects using vision is a key technology for flexible, intelligent robotic operations. The vision task is facilitated by placing optical targets or markings in advance on the objects to be located. A number of researchers have advocated the use of circular target features as the features that can be most accurately located. This paper describes extensive analysis of circle centroid accuracy using both simulations and laboratory measurements. The work was part of an effort to design a Video Positioning Sensor for NASA's Flight Telerobotic Servicer that would meet accuracy requirements. We have analyzed the main contributors to centroid error and have classified them as follows: (1) spatial quantization errors, (2) errors due to signal noise and random timing errors, (3) surface tilt errors, and (4) errors in modeling camera geometry. It is possible to compensate for the errors in (3) given an estimate of the tilt angle, and for the errors in (4) by calibrating the intrinsic camera attributes. The errors in (1) and (2) cannot be compensated for, but they can be measured and their effects somewhat reduced. To characterize these error sources, we measured centroid repeatability under various conditions, including synchronization method, signal-to-noise ratio, and frequency attenuation. Although these results are specific to our video system and equipment, they provide a reference point that should be characteristic of typical CCD cameras and digitization equipment.
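As a concrete illustration of sub-pixel centroid location on a circular target, the following is a minimal sketch of a gray-level-weighted centroid. It is not the paper's exact estimator; the threshold-based segmentation and all names are assumptions for illustration.

```python
import numpy as np

def weighted_centroid(image, threshold):
    """Intensity-weighted centroid of a bright circular target.

    A minimal sketch: pixels above `threshold` are assumed to belong
    to the target, and sub-pixel accuracy comes from weighting each
    pixel coordinate by its gray level rather than rounding to the
    nearest pixel. `image` is a 2-D numpy array of gray values.
    """
    ys, xs = np.nonzero(image > threshold)   # candidate target pixels
    weights = image[ys, xs].astype(float)    # gray levels as weights
    total = weights.sum()
    cx = (xs * weights).sum() / total        # sub-pixel x centroid
    cy = (ys * weights).sum() / total        # sub-pixel y centroid
    return cx, cy
```

Spatial quantization and signal noise, the uncompensatable error sources (1) and (2) above, show up in such an estimator as frame-to-frame scatter in `(cx, cy)`, which is what the paper's repeatability measurements quantify.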
This paper describes an algorithm that uses optical flow to detect landing hazards for a descending spacecraft. Image edge points are tracked between frames of a motion sequence, and the range to the points is calculated from the displacement of the edge points and the known motion of the camera. A novel variable-sized edge detector is used to compensate for the change in distance from one image to the next. Kalman filtering is used to incrementally improve the range estimates to those points and provide an estimate of the uncertainty in each range. Errors in camera motion and image point measurement are also modeled. A surface is then interpolated to these points, providing a complete map from which hazards such as steeply sloping areas can be detected. The algorithm has been applied to synthetic and real image sequences, with resulting range accuracy on the order of 1-3% of the range.

The ability to sense depth or range is important for autonomous robots that must operate in unstructured environments. Depth information can be used for obstacle avoidance, navigation, and object recognition. One technique for determining depth passively is to use the sequence of images taken by a camera as it moves through a static environment. The apparent motion (or optical flow) of points in the images, when combined with the known motion of the camera, can be used to unambiguously estimate the depth to those points.

In this paper, we describe an application of this technique to the problem of autonomous hazard avoidance for planetary terminal descent. Future planetary missions such as NASA's Mars Rover Sample Return mission will involve landing unmanned spacecraft in scientifically interesting areas of a planet. These areas may contain hazardous surface features such as large rocks, craters, fissures, and lava flows, which could be fatal to the spacecraft if it landed on them. One approach is to provide the landing spacecraft with an onboard hazard avoidance capability. Recently, Martin Marietta has developed techniques for hazard avoidance and incorporated them into a high-fidelity simulation of a Mars landing scenario, using a moving base carriage and a scaled terrain board [Cuse88]. The hazard avoidance techniques used to date have been adequate to detect hazards such as boulder fields and lava flows, but not hazards such as a steeply sloping surface that has a smooth texture yet is pitched at an angle steep enough (> 15°) that the landing spacecraft would topple over.

We have developed a Kalman filter-based algorithm to detect such slope hazards. Although similar Kalman filter formalisms have been developed previously ([Matt88], [Srid89]), this algorithm makes use of a novel variable-sized edge detector to compensate for the change in distance from one image to the next. The algorithm has been applied to a number of synthetic image sequences and one real image sequence of terrain.

RELATIONSHIP TO PREVIOUS WORK

A large class of algorithms computes motion (translation and rotation) from image sequences. In this ...
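The depth-from-motion and Kalman-filtering steps summarized above can be illustrated with a minimal sketch. This assumes a pinhole camera undergoing a known lateral translation, which is a simplification rather than the paper's exact formulation, and all names are illustrative: the range to a tracked point follows from its image displacement, and a per-point scalar Kalman filter fuses successive range measurements while tracking their uncertainty.

```python
def depth_from_flow(focal_px, translation, disparity_px):
    """Range to a point from its image displacement under a known
    lateral camera translation (pinhole model): z = f * T / d."""
    return focal_px * translation / disparity_px

class ScalarKalman:
    """Per-point recursive range estimate with uncertainty."""

    def __init__(self, z0, var0):
        self.z, self.var = z0, var0   # initial range and variance

    def update(self, z_meas, var_meas):
        # Standard scalar Kalman update: blend prior and measurement
        # in proportion to their certainties; variance shrinks with
        # each frame, giving the per-point uncertainty estimate.
        k = self.var / (self.var + var_meas)   # Kalman gain
        self.z += k * (z_meas - self.z)
        self.var *= (1.0 - k)
        return self.z, self.var
```

In this simplified form, each new frame yields a fresh `depth_from_flow` measurement per tracked edge point, and the filter's variance provides the confidence weight used when interpolating the final surface map.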
Industrial and space applications present environments in which it is possible, and in fact desirable, to solve robotic problems using a model-based approach. From a sensory standpoint, the reasons for employing knowledge about the objects to be manipulated are twofold. First, such knowledge permits high-level, expectation-driven reasoning as opposed to low-level, data-driven searches for primitive features. This is advantageous since purely data-driven feature extraction is typically undirected and the search space is unconstrained. Second, expectation-driven reasoning can exploit knowledge derived from features that have already been found, thus expediting subsequent searches. Conversely, however, there is a rigid requirement to specify the geometry and kinematics of the object models about which reasoning is to occur.

This paper describes a model-based computer vision system that has been coupled with a robot arm for the purpose of accurately reasoning about entities on a reconfigurable task panel. The final goal of the integrated system is to manipulate substructures such as hinged doors and laterally translatable drawers using computer vision as the primary sensory input. This overall objective is accomplished by first locating the camera at a position where it can view the entire panel, so that an initial worksite registration can be computed. Next, an approximation of each substructure's spatial configuration is determined by employing kinematic and geometric knowledge in a generate-and-test paradigm. This step is followed by repositioning the robotically mounted camera to a location and orientation that is preferable for further, more accurate spatial inferences. The camera is automatically recalibrated at the new location, and a final move is made to grasp and open or close the specified substructure. The primary advantage of the approach is that final moves can be achieved within a few millimeters of ideal target locations, even when target objects are initially viewed from locations that produce poor pose estimates, since object pose estimates are successively refined as a result of information obtained at new viewpoints.

In addition to describing the mechanisms and algorithms utilized in the research, a comparison of the accuracy of the results obtained from both non-repositionable and repositionable sensor-based spatial reasoning systems is presented.
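To illustrate the generate-and-test paradigm described above, here is a minimal sketch that hypothesizes hinge angles for a door-like substructure, projects the corresponding model points through an assumed calibrated camera model, and keeps the best-fitting hypothesis. All function and variable names, and the choice of the hinge axis, are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def generate_and_test(model_pts, observed_px, project, angles):
    """Generate-and-test sketch for a hinged-door substructure.

    For each candidate hinge angle, rotate the door's model points
    (`model_pts`, an (N, 3) array), project them into the image with
    the calibrated camera model `project` (returning (N, 2) pixel
    coordinates), and score the fit against the observed feature
    locations `observed_px`. The best-scoring angle approximates the
    door's spatial configuration.
    """
    best_angle, best_err = None, float("inf")
    for theta in angles:                     # hypothesis generation
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s, 0.0],          # rotation about an
                      [s,  c, 0.0],          # assumed z-aligned
                      [0.0, 0.0, 1.0]])      # hinge axis
        predicted = project(model_pts @ R.T) # expected image features
        err = np.linalg.norm(predicted - observed_px, axis=1).sum()
        if err < best_err:                   # hypothesis test
            best_angle, best_err = theta, err
    return best_angle, best_err
```

In the system described above, such a configuration estimate would only be a first approximation; repositioning the camera and recalibrating then refines the pose before the final grasping move.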