Cameras are becoming smaller and cheaper, so the cost-effective use of multi-camera systems in vehicles is increasingly attractive. In many cases it is desirable to cover the entire environment around the vehicle, but design restrictions and energy consumption often rule out camera constellations with overlapping fields of view. However, for a common geometric use of the extracted information, e.g. in structure-from-motion tasks, the relative alignment of the cameras, i.e. the extrinsic calibration parameters, must be known. This paper addresses the extrinsic calibration of a multi-camera rig with non-overlapping fields of view on a mobile platform. As the fields of view do not necessarily overlap, common calibration methods based on corresponding image points between the camera views fail. This problem can be overcome by exploiting the mobility of the platform. A pattern-based method for the extrinsic calibration of the camera rig on a mobile platform is presented.
This paper addresses the issue of calibrating multiple cameras on a mobile platform. Due to decreasing sensor prices and increasing processing performance, the use of multiple cameras in vehicles becomes an attractive option for environment perception. To avoid restrictions on the camera arrangement, we focus on non-overlapping camera configurations. Hence, we forgo the use of corresponding features between the cameras. The hand-eye calibration technique based on visual odometry is in principle able to solve this problem by exploiting the cameras' motions. However, this technique suffers from inaccuracies in motion estimation. In particular, the absolute magnitudes of the translational velocities of each camera are essential for a successful calibration. This contribution presents a novel approach to solving the hand-eye calibration problem for two cameras on a mobile platform with non-overlapping fields of view. The so-called motion adjustment simultaneously estimates the extrinsic parameters up to scale as well as the relative motion magnitudes. Results with simulated and real data are presented.
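The rotation part of the hand-eye problem mentioned above can be illustrated with a short sketch. This is not the paper's motion-adjustment method, only the classical observation behind hand-eye calibration: for paired relative motions A (camera 1) and B (camera 2) related by the unknown extrinsic transform X, the equation AX = XB implies that the rotation axes of A are the rotation axes of B mapped through the extrinsic rotation, which can be fitted with the Kabsch/SVD method. The function names (`hand_eye_rotation` etc.) are hypothetical.

```python
import numpy as np

def rodrigues(axis, angle):
    # Rotation matrix from axis-angle (Rodrigues' formula).
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

def rotation_axis(R):
    # Unit rotation axis of R (valid for rotation angles in (0, pi)).
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

def hand_eye_rotation(motions_a, motions_b):
    # Rotation part of hand-eye calibration A X = X B:
    # R_a R_x = R_x R_b implies axis(R_a) = R_x axis(R_b),
    # so fit R_x to the paired axes via the Kabsch/SVD method.
    A = np.array([rotation_axis(Ra) for Ra in motions_a])
    B = np.array([rotation_axis(Rb) for Rb in motions_b])
    H = B.T @ A                      # minimize sum ||a_i - R b_i||^2
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

At least two motions with non-parallel rotation axes are needed; the translation (and, with monocular visual odometry, its scale) requires the additional constraints the abstract refers to.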
In this contribution a robust approach for the estimation of camera motion is presented. For this purpose, features from a monocular image sequence are extracted and evaluated so that the three-dimensional path of a moving camera can be calculated. The algorithm gives robust results even in the presence of noise and independently moving objects. The two categories of constraint equations used in the proposed algorithm are the epipolar constraint and the trilinear constraints. The optimization of the constraints with respect to the motion parameters is implemented as a robust Iterated Extended Kalman Filter. Test results are presented from real data captured from a moving vehicle in an urban scenario.
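The epipolar constraint named above can be sketched in a few lines. This is a minimal illustration of the standard two-view relation, not the paper's filter: for a static scene point observed in normalized image coordinates by two views related by rotation R and translation t, the essential matrix E = [t]x R satisfies x2' E x1 = 0, and the algebraic residual is what an estimator (or outlier test for moving objects) can evaluate. The function names are hypothetical.

```python
import numpy as np

def skew(v):
    # Cross-product matrix [v]_x.
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def essential_matrix(R, t):
    # E = [t]_x R: normalized points of a static scene satisfy
    # the epipolar constraint x2^T E x1 = 0.
    return skew(t) @ R

def epipolar_residual(x1, x2, R, t):
    # Algebraic epipolar error for homogeneous normalized image points;
    # independently moving objects violate the constraint, which is
    # what makes it usable as an outlier test.
    return float(x2 @ essential_matrix(R, t) @ x1)
```

The trilinear constraints play the analogous role across three views, coupling point transfers between image triplets.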
The use of multiple cameras in vehicles becomes more and more attractive as hardware prices decrease rapidly. Multiple camera sensors can be used to cover the whole environment of a vehicle and for 3D scene reconstruction using stereo or structure-from-motion techniques. To use all sensor information in a common coordinate frame, it is necessary to know the relative positions and orientations of the cameras. Calibration procedures (offline or online) determine these parameters using point correspondences between the camera images. However, camera configurations that monitor the entire area around a vehicle often have non-overlapping fields of view for cost reasons. In that case, common techniques based on corresponding image points are no longer applicable. This contribution outlines a concept for online calibration of multiple cameras on a mobile platform with non-overlapping fields of view. We use the motion of the cameras and local image features to define constraints that allow for the calculation of the calibration parameters.
Structure from motion refers to a technique for obtaining 3D information from consecutive images taken with a moving monocular camera. To do this, the camera motion performed between two consecutive images needs to be known. In the work reported in this contribution, we investigated the precision of the odometry data of a commercially available passenger car. In order to identify the required precision, we developed an error model based on camera parameters and the bicycle model. We investigated two options, both based on speed measurements: the first uses steering angle measurements, the second uses measurements of the yaw rate. We found that the specified precision of all available odometry data is sufficient to solve structure from motion. Long-term measurements empirically confirm the precision values given in the specification. This result encouraged us to implement a structure-from-motion approach, which yields depth information as predicted by the theoretical considerations. Further work is needed to compensate for roll motions.
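The bicycle model underlying the error analysis above can be sketched as a simple kinematic update. This is a generic textbook formulation, not the abstract's specific error model: speed and steering angle determine the yaw rate via the wheelbase, and the pose is integrated forward in time. The function name and parameter names are hypothetical; the abstract's second option would replace the computed yaw rate with a measured one.

```python
import numpy as np

def bicycle_step(x, y, heading, v, steer, wheelbase, dt):
    # Kinematic bicycle model, Euler-integrated over one time step:
    # the yaw rate follows from speed and steering angle (option 1);
    # a measured yaw rate could be substituted directly (option 2).
    yaw_rate = v * np.tan(steer) / wheelbase
    x_new = x + v * np.cos(heading) * dt
    y_new = y + v * np.sin(heading) * dt
    heading_new = heading + yaw_rate * dt
    return x_new, y_new, heading_new
```

Propagating odometry uncertainties through this update is what lets the required sensor precision be compared against the specification.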