An extended robot–world and hand–eye calibration method is proposed in this paper to estimate the transformations between the camera and the robot. The approach is suited to mobile or medical robotics applications, where precise (and often expensive or difficult-to-sterilize) calibration objects, or sufficient movement space, cannot be provided at the work site. First, a mathematical model is established that formulates the robot-gripper-to-camera and robot-base-to-world rigid transformations using the Kronecker product. Subsequently, sparse bundle adjustment is introduced to optimize the robot–world and hand–eye calibration as well as the reconstruction results. Finally, a validation experiment on two kinds of real data sets demonstrates the effectiveness and accuracy of the proposed approach. The relative translation error of the rigid transformation is less than 8/10,000 for a Denso robot over a movement range of 1.3 m × 1.3 m × 1.2 m, and the mean distance measurement error after three-dimensional reconstruction is 0.13 mm.
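The robot–world and hand–eye problem described above has the classical form A_i X = Y B_i, where A_i and B_i are measured motions and X, Y are the unknown rigid transforms. A minimal synthetic-data sketch of the Kronecker-product linearisation of the rotation equation, followed by a linear solve for the translations, is given below. All function names are ours; this is an illustration of the linearisation only, not the paper's full pipeline (which adds sparse bundle adjustment).

```python
import numpy as np

def solve_ax_yb(As, Bs):
    """Solve A_i X = Y B_i for rigid transforms X, Y.

    Rotations satisfy R_Ai R_X = R_Y R_Bi; with column-major vec this
    linearises via the Kronecker product as
    (I (x) R_Ai) vec(R_X) - (R_Bi^T (x) I) vec(R_Y) = 0.
    """
    n = len(As)
    M = np.zeros((9 * n, 18))
    for i, (A, B) in enumerate(zip(As, Bs)):
        Ra, Rb = A[:3, :3], B[:3, :3]
        M[9*i:9*(i+1), :9] = np.kron(np.eye(3), Ra)
        M[9*i:9*(i+1), 9:] = -np.kron(Rb.T, np.eye(3))
    v = np.linalg.svd(M)[2][-1]            # null-space vector, up to scale
    Rx = v[:9].reshape(3, 3, order='F')
    Ry = v[9:].reshape(3, 3, order='F')
    s = np.sign(np.linalg.det(Rx)) / abs(np.linalg.det(Rx)) ** (1 / 3)

    def project(R):                        # nearest proper rotation (SVD)
        U, _, Vt = np.linalg.svd(s * R)
        return U @ np.diag([1, 1, np.linalg.det(U @ Vt)]) @ Vt

    Rx, Ry = project(Rx), project(Ry)
    # Translations: R_Ai t_X - t_Y = R_Y t_Bi - t_Ai (linear least squares).
    C = np.zeros((3 * n, 6)); d = np.zeros(3 * n)
    for i, (A, B) in enumerate(zip(As, Bs)):
        C[3*i:3*(i+1), :3] = A[:3, :3]
        C[3*i:3*(i+1), 3:] = -np.eye(3)
        d[3*i:3*(i+1)] = Ry @ B[:3, 3] - A[:3, 3]
    t = np.linalg.lstsq(C, d, rcond=None)[0]
    X, Y = np.eye(4), np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, t[:3]
    Y[:3, :3], Y[:3, 3] = Ry, t[3:]
    return X, Y

def rand_T(rng):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    Q[:, 0] *= np.sign(np.linalg.det(Q))   # force det = +1
    T = np.eye(4); T[:3, :3] = Q; T[:3, 3] = rng.standard_normal(3)
    return T

# Noise-free synthetic check: B_i = Y^{-1} A_i X by construction.
rng = np.random.default_rng(0)
X_true, Y_true = rand_T(rng), rand_T(rng)
As = [rand_T(rng) for _ in range(10)]
Bs = [np.linalg.inv(Y_true) @ A @ X_true for A in As]
X_est, Y_est = solve_ax_yb(As, Bs)
```

With noise-free data the null space of the stacked system is one-dimensional, so the SVD recovers both rotations up to a common scale and sign, which the determinant normalisation removes.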
To overcome the restriction of traditional robot–sensor calibration methods, which solve the tool–camera and robot–world transformations by relying on a calibration target, a calibration-free approach is proposed that solves the robot–sensor calibration problem of the form AX = YB using Second-Order Cone Programming (SOCP). First, a structure-from-motion approach recovers the camera motion up to scale. Then, the rotation and translation in the calibration equation are parameterized using dual quaternions. Finally, SOCP is used to simultaneously solve for the optimal scale factor of the camera motion and for the robot–world and hand–eye transformations. The experimental results indicate a rotation relative error of 3.998% and a translation relative error of 0.117% in the absence of a calibration target as a 3D benchmark. Compared with similar methods, the proposed approach effectively improves the calibration accuracy of the robot–world and hand–eye transformations and extends the application scope of robot–sensor calibration.
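Because structure-from-motion recovers camera translations only up to a global scale λ, the translation part of AX = YB becomes linear in (t_X, t_Y, λ) once the rotations are known. The sketch below illustrates that scale-aware linear system on synthetic data; it is a plain least-squares stand-in for intuition, not the paper's dual-quaternion SOCP formulation, and all names are ours.

```python
import numpy as np

def solve_translations_and_scale(Ras, us, Ry, tbs):
    """Given rotations R_Ai and R_Y, unscaled SfM translations
    u_i = t_Ai / lambda, and hand translations t_Bi, solve
    R_Ai t_X - t_Y + lambda u_i = R_Y t_Bi
    for (t_X, t_Y, lambda) by linear least squares."""
    n = len(Ras)
    C = np.zeros((3 * n, 7)); d = np.zeros(3 * n)
    for i in range(n):
        C[3*i:3*(i+1), :3] = Ras[i]       # coefficient of t_X
        C[3*i:3*(i+1), 3:6] = -np.eye(3)  # coefficient of t_Y
        C[3*i:3*(i+1), 6] = us[i]         # coefficient of lambda
        d[3*i:3*(i+1)] = Ry @ tbs[i]
    p = np.linalg.lstsq(C, d, rcond=None)[0]
    return p[:3], p[3:6], p[6]            # t_X, t_Y, lambda

def rand_rot(rng):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    Q[:, 0] *= np.sign(np.linalg.det(Q))
    return Q

# Synthetic check: build consistent unscaled SfM translations u_i.
rng = np.random.default_rng(1)
Ry = rand_rot(rng)
tX_true, tY_true, lam_true = rng.standard_normal(3), rng.standard_normal(3), 2.5
Ras = [rand_rot(rng) for _ in range(8)]
tbs = [rng.standard_normal(3) for _ in range(8)]
us = [(Ry @ tb + tY_true - Ra @ tX_true) / lam_true
      for Ra, tb in zip(Ras, tbs)]
tX, tY, lam = solve_translations_and_scale(Ras, us, Ry, tbs)
```

The SOCP formulation in the abstract additionally couples this with the dual-quaternion rotation constraints and a cone-programming objective; the point here is only that the unknown SfM scale enters the calibration equations linearly.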
Highly accurate and easy-to-operate calibration (determining the interior and distortion parameters) and orientation (determining the exterior parameters) of cameras over large volumes are important for expanding the application scope of 3D vision and photogrammetry. This paper proposes a method for simultaneously calibrating, orienting, and assessing multi-camera 3D measurement systems in large measurement volumes. The primary idea is to build 3D point and length arrays by moving a scale bar through the measurement volume and then to conduct a self-calibrating bundle adjustment over all the image points and lengths from both cameras. The relative exterior parameters between the camera pair are estimated by the five-point relative orientation method. The interior and distortion parameters of each camera and the relative exterior parameters are then optimized through bundle adjustment of a network geometry strengthened by the distance constraints. The method provides both internal precision and external accuracy assessments of the calibration performance. Simulations and real-data experiments are designed and conducted to validate the effectiveness of the method and to analyze its performance under different network geometries. The RMSE of length measurement is less than 0.25 mm, and the relative precision is higher than 1/25,000, for a two-camera system calibrated by the proposed method in a volume of 12 m × 8 m × 4 m. Compared with the state-of-the-art point-array self-calibrating bundle adjustment, the proposed method is easier to operate and significantly reduces systematic errors caused by incorrect scaling.
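The scale-bar lengths act as metric constraints on an otherwise scale-free reconstruction. A minimal illustration of that idea (not the paper's full self-calibrating bundle adjustment, which constrains the network nonlinearly) is the closed-form global scale that best fits the reconstructed bar-endpoint distances to the known bar length; the setup and names below are ours.

```python
import numpy as np

def fit_scale(endpoints_a, endpoints_b, bar_length):
    """Least-squares global scale s minimising sum_i (s*d_i - L)^2,
    where d_i is the reconstructed distance between the scale-bar
    endpoints in pose i. Closed form: s = L * sum(d_i) / sum(d_i^2)."""
    d = np.linalg.norm(np.asarray(endpoints_a) - np.asarray(endpoints_b),
                       axis=1)
    return bar_length * d.sum() / (d ** 2).sum()

# Synthetic check: a reconstruction that is uniformly 3% too small.
rng = np.random.default_rng(2)
L = 1.0                                    # known bar length, metres
pa = rng.uniform(0, 5, (20, 3))            # first endpoint, 20 bar poses
dirs = rng.standard_normal((20, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pb = pa + L * dirs                         # true second endpoints
s_est = fit_scale(0.97 * pa, 0.97 * pb, L)
```

In the full method each measured length enters the bundle adjustment as a distance constraint between two adjusted 3D points, which is what suppresses the scaling errors mentioned in the abstract.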
Lens distortion parameters vary with the distance between the object point and the image plane. We propose an analytical model of depth-dependent distortion for large-depth-of-field digital cameras used in high-accuracy photogrammetry. Compared with the magnification-dependent model, the proposed one requires no focusing operation during calibration, thereby eliminating focusing errors and preserving the stability of the camera's interior parameters. Compared with the widely used constant-parameter distortion model, the proposed model reduces the maximum distortion variation from 8.0 μm to 0.9 μm at a 20 mm radial distance when the depth changes from 2.46 m to 4.51 m for the 35 mm lens, and from 23.0 μm to 3.6 μm when the depth changes from 2.07 m to 4.17 m for the 50 mm lens. Additionally, when applied in photogrammetric bundle adjustment, the proposed model reduces the length measurement standard deviation from 0.055 mm to 0.028 mm in a measurement volume of 7.0 m × 3.5 m × 2.5 m compared with the constant-parameter model.
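The abstract does not reproduce the analytical model itself. One generic way such depth dependence is often expressed in the photogrammetric literature is to interpolate radial distortion coefficients calibrated at two reference depths; the sketch below shows that interpolation scheme purely as an assumed stand-in, not the paper's actual model, and all names are ours.

```python
def radial_distortion(r, depth, s1, k1_s1, s2, k1_s2):
    """Radial distortion dr = k1(depth) * r^3, with k1 linearly
    interpolated in inverse depth between coefficients k1_s1 and
    k1_s2 calibrated at reference depths s1 and s2 (an assumed,
    generic depth-dependent scheme; r and dr in mm, depths in m)."""
    alpha = (1.0 / depth - 1.0 / s2) / (1.0 / s1 - 1.0 / s2)
    k1 = alpha * k1_s1 + (1.0 - alpha) * k1_s2
    return k1 * r ** 3

# At the reference depths the scheme returns the calibrated values:
# dr = k1 * 20^3 with illustrative coefficients (units mm^-2).
dr_near = radial_distortion(20.0, 2.46, 2.46, 1.0e-6, 4.51, 2.0e-6)
dr_far = radial_distortion(20.0, 4.51, 2.46, 1.0e-6, 4.51, 2.0e-6)
```

A constant-parameter model corresponds to forcing k1_s1 = k1_s2, which is exactly the restriction the depth-dependent model lifts.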
The work reported in this paper utilizes global geometrical relationships and local shape features to register multi-spectral images for fusion-based face recognition. We first propose a multi-spectral face image registration method based on both the global and local structures of feature point sets, combining the global geometrical relationship and local shape features within a new Student's t mixture probabilistic model framework. On the one hand, we use the inner-distance shape context as the local shape descriptor of the feature point sets. On the other hand, we formulate the feature point set registration of the multi-spectral face images as Student's t mixture model estimation, with the local shape descriptors replacing the mixing proportions of the prior Student's t mixture model. Furthermore, to improve the anti-interference performance of face recognition, a guided-filtering and gradient-preserving image fusion strategy is used to fuse the registered multi-spectral face images, so that the fused image retains more of the visible image's apparent details and the infrared image's thermal radiation information. Subjective and objective registration experiments are conducted with manually selected landmarks and real multi-spectral face images. Qualitative and quantitative comparisons with state-of-the-art methods demonstrate the accuracy and robustness of the proposed method in solving the multi-spectral face image registration problem.
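The key modification, replacing uniform mixing proportions with descriptor-derived priors, can be illustrated with a single E-step of mixture-model point registration: each model point's prior weight comes from the similarity of its shape descriptor to the observed point's descriptor. This is a schematic Gaussian-mixture sketch of the idea, not the paper's Student's t formulation, and all names are ours.

```python
import numpy as np

def e_step_with_descriptor_priors(x, model_pts, model_desc, x_desc, sigma2):
    """Posterior responsibilities p(m | x) proportional to
    pi_m * N(x; y_m, sigma2 * I), where the mixing proportions pi_m
    come from shape-descriptor similarity instead of being uniform."""
    # Descriptor-similarity priors (softmax of negative descriptor distance).
    desc_dist = np.linalg.norm(model_desc - x_desc, axis=1)
    pi = np.exp(-desc_dist)
    pi /= pi.sum()
    # Gaussian likelihood of the observed point under each component.
    sq = np.sum((model_pts - x) ** 2, axis=1)
    lik = np.exp(-sq / (2.0 * sigma2))
    post = pi * lik
    return post / post.sum()

# Toy example: the observed point is near model point 0 and its
# descriptor also matches model point 0's descriptor.
model_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
model_desc = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
x = np.array([0.05, 0.02])
x_desc = np.array([1.0, 0.0])
post = e_step_with_descriptor_priors(x, model_pts, model_desc, x_desc, 0.1)
```

In the paper's formulation the Gaussian component is replaced by a Student's t density, whose heavier tails make the soft assignments more robust to outlier points.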
This paper introduces invariant Hough random ferns (IHRF), incorporating rotation and scale invariance into the local feature description, random ferns classifier training, and Hough voting stages. The method is especially suited to object detection under changes in object appearance and scale, partial occlusions, and pose variations. Its efficacy is validated through experiments on a large set of challenging benchmark datasets, and the results demonstrate that the proposed method outperforms state-of-the-art conventional approaches such as bounding-box-based and part-based methods. Additionally, we propose an efficient clustering scheme, based on local patches' appearance and their geometric relations, that provides pixel-accurate, top-down segmentations from IHRF back-projections. This refined segmentation can improve the quality of online object tracking because it avoids the drifting problem. An online tracking framework based on IHRF is therefore established, trained and updated in each frame to distinguish and segment the object from the background. Finally, experimental results on both object segmentation and long-term object tracking show that the method yields accurate and robust tracking performance in a variety of complex scenarios, especially under severe occlusions and nonrigid deformations.
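The Hough voting stage can be sketched independently of the ferns: each local patch casts a vote for the object centre through its learned offset, and the accumulator peak gives the detection. A minimal toy illustration (our own naming and data; real IHRF votes are weighted by fern leaf statistics):

```python
import numpy as np

def hough_vote(patch_xy, offsets, shape):
    """Accumulate object-centre votes: each patch at (x, y) votes for
    (x, y) + its learned centre offset; the accumulator peak is the
    detected centre."""
    acc = np.zeros(shape)
    for (x, y), (dx, dy) in zip(patch_xy, offsets):
        cx, cy = x + dx, y + dy
        if 0 <= cx < shape[0] and 0 <= cy < shape[1]:
            acc[cx, cy] += 1.0
    peak = np.unravel_index(np.argmax(acc), shape)
    return peak, acc

# Three patches of a toy object whose centre is (10, 12), plus an outlier.
patches = [(8, 12), (12, 12), (10, 9), (3, 3)]
offsets = [(2, 0), (-2, 0), (0, 3), (1, 1)]
centre, acc = hough_vote(patches, offsets, (20, 20))
```

Back-projection, used for the segmentation step in the abstract, simply runs this mapping in reverse: the patches whose votes landed on the winning peak are the ones attributed to the object.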