The Ladybug5 is an integrated, multi-camera system that features a near-spherical field of view. It is commonly deployed on mobile mapping systems to collect imagery for 3D reality capture. This paper describes an approach for the geometric modelling and self-calibration of this system. The collinearity equations of the pinhole camera model are augmented with five radial lens distortion terms to correct the severe barrel distortion. Weighted relative orientation stability constraints are added to the self-calibrating bundle adjustment solution to enforce the angular and positional stability between the Ladybug5's six cameras. Results are presented from two calibration datasets and an independent dataset for accuracy assessment. It is demonstrated that centimetre-level 3D reconstruction accuracy can be achieved with the proposed approach. Moreover, the effectiveness of the lens distortion modelling is demonstrated: image-space precision and object-space accuracy are improved by 92% and 93%, respectively, relative to a two-term model. The high correlations between lens distortion coefficients were not found to be detrimental to the solution. The mechanical stability of the system was assessed by comparing calibrations performed before and after ten months of routine camera system use. The results suggest sub-pixel interior orientation stability and millimetre-level relative orientation stability. Analyses of accuracy and parameter correlation demonstrate that a slightly relaxed weighting strategy is preferable to tightly enforced relative orientation stability constraints.
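The five radial lens distortion terms mentioned above extend the classic odd-order polynomial radial distortion series. A minimal sketch of such a correction follows; the coefficient values and the sign convention here are purely illustrative, not the paper's calibrated parameters:

```python
def radial_distortion_correction(x, y, k):
    """Correct image coordinates (x, y), expressed relative to the
    principal point, using an odd-order radial distortion polynomial.

    k : sequence of radial coefficients (k1..k5), as in a five-term
        Brown-style model. Sign conventions vary between formulations.
    """
    r2 = x * x + y * y
    # dr/r = k1*r^2 + k2*r^4 + k3*r^6 + k4*r^8 + k5*r^10
    scale = sum(ki * r2 ** (i + 1) for i, ki in enumerate(k))
    return x + x * scale, y + y * scale

# Hypothetical coefficients for a barrel-distorted lens (k1 < 0)
k = [-0.25, 0.08, -0.01, 0.0, 0.0]
xc, yc = radial_distortion_correction(0.5, 0.3, k)
```

With a negative k1 (barrel distortion), corrected points are pulled toward the principal point, which is why severe wide-angle distortion requires the higher-order terms to model behaviour near the image edges.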
Abstract. Chromatic aberration in colour digital camera imagery can affect the accuracy of photogrammetric reconstruction. Both longitudinal and transverse chromatic aberrations can be effectively modelled by making separate measurements in each of the blue, green and red colour bands and performing a specialized self-calibrating bundle adjustment. This paper presents the results of an investigation with two aims. The first aim is to quantify the presence of chromatic aberration in two sets of cameras: the six individual cameras comprising a Ladybug5 system, calibrated simultaneously in air; and four GoPro Hero 5 cameras calibrated independently under water. The second aim is to investigate the impacts of imposing different constraints in the self-calibration adjustment. To this end, four different adjustment cases were performed for all ten cameras: independent adjustment of the observations from each colour band; combined adjustment of all colour bands’ observations with common object points; combined adjustment of all colour bands with common object points and common exterior orientation parameters for each colour band triplet; and combined adjustment with common object points and certain common interior orientation parameters. The results show that the Ladybug5 cameras exhibit a small (1-2 pixel) amount of transverse chromatic aberration but no longitudinal chromatic aberration. The GoPro Hero 5 cameras exhibit significant (25 pixel) transverse chromatic aberration as well as longitudinal chromatic aberration. The principal distance was essentially independent of the adjustment case for the Ladybug5, but it was not for the GoPro Hero 5. The principal point position and precision were both affected considerably by adjustment case. Radial lens distortion was invariant to the adjustment case. The impact of adjustment case on decentring distortion was minimal in both cases.
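Transverse chromatic aberration of the kind quantified above (1-2 pixels for the Ladybug5, roughly 25 pixels for the GoPro Hero 5) can be expressed as the radial offset between matched image measurements of the same target in different colour bands. A minimal sketch, with illustrative names and conventions rather than the authors' implementation:

```python
import numpy as np

def transverse_ca(pts_red, pts_blue, pp):
    """Radial offset (pixels) between matched red- and blue-band image
    measurements of the same targets, relative to the principal point pp.
    Positive values mean the red measurement lies farther from pp,
    i.e. the red band image is radially magnified relative to blue."""
    r_red = np.linalg.norm(np.asarray(pts_red, float) - pp, axis=1)
    r_blue = np.linalg.norm(np.asarray(pts_blue, float) - pp, axis=1)
    return r_red - r_blue
```

In a self-calibration context, a systematic trend of this offset with radial distance is what the per-band interior orientation parameters absorb.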
Terrestrial light detection and ranging (LiDAR) data can be acquired from either static or mobile platforms. The latter presents some challenges in terms of resolution and accuracy, but the opportunity to cover a larger region and repeat surveys often prevails in practice. This paper presents a machine learning algorithm (MLA) for automated lithological classification of individual points within LiDAR point clouds based on intensity and geometry information. Two example data sets were collected by static and mobile platforms in an oil sands pit mine, and the MLA was trained to distinguish sandstone and mudstone laminations. The type of approach presented here has the potential to be developed and applied for geological mapping applications such as reservoir characterization or underground excavation face mapping.
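As a rough illustration of per-point classification from intensity and geometry features, a nearest-centroid assignment could look like the sketch below. This is a generic placeholder, not the paper's MLA, and the two-feature layout (intensity, local roughness) is an assumption for demonstration:

```python
import numpy as np

def nearest_centroid_classify(features, centroids):
    """Assign each point's feature vector to the nearest class centroid.

    features  : (N, d) per-point descriptors, e.g. return intensity and
                a local geometric roughness measure (illustrative choice).
    centroids : (C, d) one row per lithology class (e.g. sandstone,
                mudstone). Returns an (N,) array of class indices."""
    features = np.asarray(features, float)
    centroids = np.asarray(centroids, float)
    # Euclidean distance from every point to every class centroid
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```

A trained classifier would replace the fixed centroids with parameters learned from labelled training points, but the per-point feature-to-label mapping is the same shape of problem.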
The use of consumer cameras fitted with extreme wide‐angle (EWA) lenses for photogrammetric measurement is increasing. Conventional modelling of EWA systems relies on the pinhole camera model and up to five radial lens distortion terms. Aiming to reduce model complexity, this paper reports on an investigation into an alternate approach using fisheye lens models for EWA systems, despite them not falling strictly into the fisheye category. Four fisheye models were tested on four different cameras under laboratory conditions. The self‐calibration results show superior model fit for all fisheye models over the pinhole‐plus‐radial model in terms of residual RMS. The number of radial distortion terms required for the fisheye models was lower in all cases, so model complexity was reduced. Independent assessment revealed very similar 3D reconstruction accuracy for all models. The results suggest that fisheye modelling is an advantageous alternative for EWA lens systems.
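The geometric difference motivating the comparison above is how image radius grows with incidence angle: perspective (pinhole) projection diverges toward 90°, whereas the equidistant fisheye projection grows linearly. A small sketch with arbitrary focal length and angle:

```python
import math

def pinhole_radius(f, theta):
    # Perspective (pinhole) projection: r = f * tan(theta)
    return f * math.tan(theta)

def equidistant_radius(f, theta):
    # Equidistant fisheye projection: r = f * theta (theta in radians)
    return f * theta

f = 1000.0                  # focal length in pixels (illustrative)
theta = math.radians(80.0)  # incidence angle near the edge of a wide FOV
r_pinhole = pinhole_radius(f, theta)
r_fisheye = equidistant_radius(f, theta)
```

Because the pinhole radius grows so much faster at wide angles, forcing an extreme wide-angle lens into the pinhole model pushes large systematic effects into the radial distortion terms, which is consistent with the finding that fisheye models need fewer terms.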
Abstract. In this work, a new method is developed for the automatic and accurate detection and labelling of signalized, un-coded circular targets for the purpose of automated camera calibration in a test field. The only requirements of this method are the approximate height of the camera, an approximate range of orientations of the camera, and the object-space coordinates of the targets. In each image, circular targets are detected using adaptive thresholding and robust ellipse fitting. Labelling of those targets is performed next. First, the exterior orientation parameters of the image are estimated using a one-point pose-estimation approach, where a list of possible orientations and target labels is used, along with height, to calculate the camera position. The estimated position and orientation of the camera, combined with the interior orientation parameters (IOPs), are then used to back-project the known object-space coordinates of the targets into the image space. These targets are then matched against the targets detected in the image, and the list entry with the best fit is chosen as the solution. This resolves both the detection and labelling of the targets without the need for any coded targets or their associated software packages, and each image is solved independently, allowing for parallel processing. This process accurately labels 92–97% of images, with average accuracy rates of 97% or better and average completeness rates of 70–95% in imagery from the three cameras tested. The cameras were calibrated using observations from the detection and labelling process, which resulted in sub-pixel root mean square (RMS) values for the image-space residuals.
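The back-projection step described above, projecting known object-space target coordinates into image space via the collinearity equations, can be sketched as follows. The function name, sign convention, and parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def back_project(X, R, C, f, xp=0.0, yp=0.0):
    """Project object-space points into image space via the
    collinearity equations.

    X      : (N, 3) object-space point coordinates
    R      : (3, 3) rotation matrix from object to camera frame
    C      : (3,) camera position (exterior orientation)
    f      : principal distance; (xp, yp) : principal point (IOPs)
    """
    Xc = (np.asarray(R) @ (np.asarray(X, float) - C).T).T  # camera frame
    x = xp - f * Xc[:, 0] / Xc[:, 2]
    y = yp - f * Xc[:, 1] / Xc[:, 2]
    return np.column_stack([x, y])
```

Matching these predicted image positions against the ellipse-fit detections is then a nearest-neighbour assignment, and the candidate pose with the best overall fit resolves the labelling.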