This paper describes a new technique for computing the 3D position and orientation of a camera relative to the last joint of a robot manipulator in an eye-on-hand configuration. It is part of a trio for real-time 3D robotics eye, eye-to-hand, and hand calibration; the three use a common setup and calibration object, with common coordinate systems, matrices, vectors, symbols, and operations throughout, and are especially suited to the machine vision community. The technique is easier and faster than existing techniques, is ten times more accurate in rotation than any existing technique using standard-resolution cameras, and equals the state-of-the-art vision-based technique in linear accuracy. The robot makes a series of automatically planned movements with a camera rigidly mounted at the gripper. At the end of each move, it takes a total of 90 ms to grab an image, extract image feature coordinates, and perform camera extrinsic calibration. After the robot finishes all the movements, it takes only a few milliseconds to do the calibration. A series of generic geometric properties or lemmas is presented, leading to the derivation of the final algorithms, which aim at simplicity, efficiency, and accuracy while giving ample geometric and algebraic insight. Besides describing the new technique, critical factors influencing the accuracy are analyzed, and procedures for improving accuracy are introduced. Test results of both simulation and real experiments on an IBM Cartesian robot are reported and analyzed. The trio consists of: 1) Camera Calibration (see [6], [10], [11], [13]); 2) Robot Eye-to-Hand Calibration (this paper); 3) Cartesian Robot Hand Calibration [5].
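The abstract does not spell out the algorithm, but the problem it describes is the classic hand/eye calibration: recover the unknown gripper-to-camera transform X from pairs of gripper motions A_i and camera motions B_i that satisfy A_i X = X B_i. A minimal NumPy sketch of one standard two-stage solution (rotation first via the fact that the rotation axis of A_i is the axis of B_i rotated by R, then translation by linear least squares) is given below; the function names and the two-stage structure are illustrative, not the paper's notation.

```python
import numpy as np

def rodrigues(v):
    """Axis-angle vector -> rotation matrix (exponential map)."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.eye(3)
    k = v / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def rotvec(R):
    """Rotation matrix -> axis-angle vector (log map)."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * w / (2 * np.sin(theta))

def hand_eye(As, Bs):
    """Solve A_i X = X B_i for the 4x4 gripper-to-camera transform X.

    As: 4x4 gripper motions between robot stations.
    Bs: 4x4 camera motions (from extrinsic calibration) for the same moves.
    Needs at least two motions with non-parallel rotation axes.
    """
    # Rotation: since R_Ai = R R_Bi R^T, the log vectors satisfy
    # a_i = R b_i exactly; solve the orthogonal Procrustes problem by SVD.
    M = sum(np.outer(rotvec(A[:3, :3]), rotvec(B[:3, :3]))
            for A, B in zip(As, Bs))
    U, _, Vt = np.linalg.svd(M)
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    # Translation: equating translation parts of A_i X = X B_i gives
    # (R_Ai - I) t = R t_Bi - t_Ai; stack over all motions and solve.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([R @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R, t
    return X
```

With noise-free motions the recovery is exact up to numerical precision; with measured data, the stacked least-squares step averages the error over all movements, which is why more (and larger-angle) robot motions improve accuracy.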
This paper describes techniques for calibrating certain intrinsic camera parameters for machine vision. The parameters to be calibrated are the horizontal scale factor, i.e., the factor that relates the sensor element spacing of a discrete-array camera to the picture element spacing after sampling by the image acquisition circuitry, and the image center, i.e., the intersection of the optical axis with the camera sensor. The scale factor calibration uses a 1D FFT and is accurate and efficient. It also permits the use of only one coplanar set of calibration points for general camera calibration. Three groups of techniques for center calibration are presented: Group I requires a laser and a four-degree-of-freedom adjustment of its orientation, but is simplest in concept, and is accurate and reproducible. Group II is simple to perform, but is less accurate than the other two. The most general, Group III, is accurate and efficient, but requires accurate image feature extraction of calibration points with known 3D coordinates. A feasible setup is described. Results of real experiments are presented and compared with theoretical predictions. Accuracy and reproducibility of the calibrated parameters are reported, as well as the improvement in actual 3D measurement due to center calibration.
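The abstract does not give the FFT procedure, but the underlying idea can be illustrated: when the digitizer resamples the scan line at a spacing different from the sensor element spacing, a pattern of known period appears at a shifted frequency, and the peak of a 1D FFT of one image row reveals the ratio. The toy sketch below uses invented numbers (an assumed pattern period and scale factor), not the paper's experimental setup.

```python
import numpy as np

# Assumed toy numbers: a bar pattern with a period of 8 sensor elements,
# digitized with an unknown horizontal scale factor of 1.10
# (one pixel spacing = 1.10 sensor element spacings).
true_scale = 1.10
period_sensor = 8.0              # pattern period in sensor elements
n = 4096                         # pixels in one scan line
x_pixels = np.arange(n)

# Intensity along one image row, sampled at the pixel positions:
# the pattern's frequency in cycles/pixel is true_scale / period_sensor.
row = np.cos(2 * np.pi * (x_pixels * true_scale) / period_sensor)

# 1D FFT of the (windowed) row; locate the dominant non-DC frequency.
spectrum = np.abs(np.fft.rfft(row * np.hanning(n)))
f_obs = (np.argmax(spectrum[1:]) + 1) / n    # cycles per pixel

# Since f_obs = scale / period_sensor, the scale factor falls out directly.
scale_est = f_obs * period_sensor
```

Frequency estimation from an FFT peak is robust to intensity noise, which is consistent with the abstract's claim that the scale factor calibration is both accurate and efficient.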