Abstract: This paper presents a novel approach to online initial camera calibration that estimates the extrinsic parameters for vision-based intelligent driver assistance systems. The method uses the periodicity of dashed lane markings and velocity information to determine the extrinsic camera parameters: height, pitch angle, and roll angle. A lane marking detector is utilized to convert the images of road scenes into a set of one-dimensional time series. Thereby, the lane marking detector samples the markings at predefined ve…
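The flat-ground geometry underlying this kind of calibration can be sketched with a pinhole model: given camera height h and pitch θ, each image row of a detected marking maps to a longitudinal ground distance, so the spacing of dashed markings observed over time, combined with vehicle velocity, constrains h and θ. A minimal sketch of that back-projection, with hypothetical parameter values rather than the paper's implementation:

```python
import math

def ground_distance(row, f, cy, height, pitch):
    """Longitudinal distance to the flat-ground point imaged at `row`.

    f      : focal length in pixels
    cy     : principal-point row
    height : camera height above the road [m]
    pitch  : camera pitch below the horizon [rad]

    Image rows grow downward, so rows below cy image nearer ground points.
    """
    depression = pitch + math.atan((row - cy) / f)  # total angle below horizon
    return height / math.tan(depression)

# Example values (hypothetical): rows further down the image map to
# nearer ground points, which is what ties marking spacing to geometry.
d_far = ground_distance(400, f=800, cy=300, height=1.2, pitch=0.05)
d_near = ground_distance(500, f=800, cy=300, height=1.2, pitch=0.05)
```

Comparing the back-projected positions of one dash across frames against the distance actually travelled (speed times frame interval) is what makes height and pitch observable.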
“…However, this approach cannot work in off-road situations where lane markings are not present, and its accuracy can degrade when lane markings are worn. Hold et al. [19] calibrated the extrinsic parameters of a front-view camera. This method detects dashed lane markings at predefined vertical coordinates and calculates the extrinsic parameters by analyzing the detected lane markings and the measured vehicle velocity.…”
Section: Related Research
“…One vanishing point is obtained from the left and right lane markings, and the other is obtained from lines connecting the corners of the lane segments. These three methods [19, 20, 21] require dashed lane markings, and two of them [20, 21] need accurate vehicle speed synchronized with the camera. Ribeiro et al. [22] proposed a method similar to that of [20].…”
This paper proposes a method that automatically calibrates the four cameras of an around view monitor (AVM) system in natural driving situations. The proposed method estimates the orientation angles of the four cameras composing the AVM system, assuming that their locations and intrinsic parameters are known in advance. It utilizes lane markings because they exist in almost all on-road situations and appear across the images of adjacent cameras. The method starts by detecting lane markings in the images captured by the four cameras of the AVM system in a cost-effective manner. False lane markings are rejected by analyzing the statistical properties of the detected markings. Once enough correct lane markings have been gathered, the method first calibrates the front and rear cameras, and then calibrates the left and right cameras with the help of the front and rear calibration results. This two-step approach is essential because the side cameras cannot be fully calibrated by themselves due to insufficient lane marking information. After this initial calibration, the method collects corresponding lane markings appearing across the images of adjacent cameras and simultaneously refines the initial calibration results of all four cameras to obtain seamless AVM images. For a long image sequence, the method performs the calibration multiple times and selects the medoid as the final result, reducing computational cost and dependency on any specific place. In the experiments, the proposed method was quantitatively and qualitatively evaluated in various real driving situations and showed promising results.
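The medoid selection mentioned above can be sketched as follows: among repeated calibration estimates, pick the one minimizing the summed distance to all others. This is a generic illustration with a plain Euclidean distance over parameter vectors; the paper's actual parameterization is not specified here.

```python
import numpy as np

def medoid(estimates):
    """Pick the estimate minimizing summed distance to all others.

    Unlike the mean, the medoid is always one of the actual estimates
    and is robust to a few outlying calibration runs.
    """
    X = np.asarray(estimates, dtype=float)
    # pairwise Euclidean distances between parameter vectors
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return X[dists.sum(axis=1).argmin()]

# Three consistent runs and one outlier: the outlier never wins.
runs = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [0.05, 0.02]]
best = medoid(runs)
```

Because the medoid is an actual run rather than an average, one calibration performed at an unrepresentative place cannot drag the final result toward it.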
“…These patterns may be located on the ground [15, 16] or painted on the hood of the vehicle [17]. Road marks: Secondly, the calibration process is performed by means of road marks [18], such as lines [19, 20, 21] or dashed lines on the roadway [22]; it is also possible to use parking lines as the calibration pattern [23]. These methods allow the extrinsic parameters to be recalculated at different times and positions.…”
Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only to improve safety but also to make autonomous driving possible. Many of these intelligent systems rely on computer vision to perceive the environment and act accordingly. Estimating the pose of the vision system is of great importance because matching measurements between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. This paper presents a new method of camera pose estimation for stereo systems, whose main contribution over the state of the art is the estimation of the pitch angle without being affected by the roll angle. The self-calibration method is validated by comparison with relevant camera pose estimation methods, using a synthetic sequence to measure the continuous error against a ground truth. This validation is enriched by experimental results of the method in real traffic environments.
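One common way to recover pitch from image measurements is to back-project the vanishing point of straight road lines through the intrinsic matrix K; the elevation of the resulting ray gives the pitch. Note that this generic construction is only roll-free when the roll is zero — precisely the coupling the paper's contribution addresses — so the sketch below illustrates the baseline geometry, not this paper's roll-invariant estimator. Parameter values are hypothetical.

```python
import numpy as np

def pitch_from_vanishing_point(vp, K):
    """Pitch angle from the vanishing point of straight road lines.

    Back-projects the vanishing point (u, v) through the intrinsic
    matrix K and returns the elevation of the resulting ray relative
    to the optical axis. Positive means the camera is tilted down
    (image y grows downward, so the horizon appears above cy).
    Assumes zero roll.
    """
    d = np.linalg.inv(K) @ np.array([vp[0], vp[1], 1.0])
    return -np.arctan2(d[1], d[2])

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
# Vanishing point 80 px above the principal point -> camera pitched down.
pitch = pitch_from_vanishing_point((320.0, 160.0), K)
```

With nonzero roll, the vanishing point shifts horizontally as well, which is why a naive version of this construction contaminates the pitch estimate with the roll angle.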
“…(2) Existing solutions that are feasible for the surround-view case mostly require relatively ideal environments. For example, the approaches proposed in [2, 4, 13, 21, 28] require two parallel lane-lines on the ground that can be clearly detected. Thus, they usually have noticeable limitations in both usability and generalization capability.…”
Generally, a surround-view system (SVS), an indispensable component of advanced driver assistance systems (ADAS), consists of four to six wide-angle fisheye cameras. As long as both the intrinsics and extrinsics of all cameras have been calibrated, a top-down surround-view with real scale can be synthesized at runtime from the fisheye images captured by these cameras. However, when the vehicle is driving on the road, relative poses between cameras in the SVS may drift from their initially calibrated states due to bumps or collisions. If the extrinsics' representations are not adjusted accordingly, obvious geometric misalignment will appear on the surround-view. Currently, research on correcting the extrinsics of the SVS in an online manner is quite sporadic, and a mature and robust pipeline is still lacking. As an attempt to fill this research gap, this work presents a novel extrinsics correction pipeline designed specifically for the SVS, namely ROECS (Robust Online Extrinsics Correction of the Surround-view system). Specifically, a "refined bi-camera error" model is first designed. Then, by minimizing the overall "bi-camera error" within a sparse and semi-direct framework, the SVS's extrinsics can be iteratively optimized and eventually become accurate. Besides, an innovative three-step pixel selection strategy is also proposed. The superior robustness and generalization capability of ROECS are validated by both quantitative and qualitative experimental results. To make the results reproducible, the collected data and source code have been released at https://cslinzhang.github.io/ROECS/.
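The notion of a bi-camera error can be illustrated in a drastically simplified form: a ground point visible to two adjacent cameras is projected into both images, and the intensity difference is the residual to be minimized over the extrinsics. The sketch below uses hypothetical projection callbacks and a plain sum of squares; ROECS's refined model and three-step pixel selection are considerably more involved.

```python
import numpy as np

def bicamera_error(ground_pts, img_a, img_b, project_a, project_b):
    """Sum of squared intensity differences over shared ground points.

    project_a / project_b map a ground point to (row, col) pixel
    coordinates in each camera's image; in a real system they would
    encode that camera's intrinsics and current extrinsic estimate,
    which is what the optimizer adjusts.
    """
    err = 0.0
    for p in ground_pts:
        ra, ca = project_a(p)
        rb, cb = project_b(p)
        err += (float(img_a[ra, ca]) - float(img_b[rb, cb])) ** 2
    return err

# Toy check: identical images under identity projections give zero error;
# a constant intensity offset of 1 contributes 1 per shared point.
img = np.arange(16.0).reshape(4, 4)
ident = lambda p: (int(p[0]), int(p[1]))
pts = [(0, 0), (1, 1), (2, 2)]
zero = bicamera_error(pts, img, img, ident, ident)
off = bicamera_error(pts, img, img + 1.0, ident, ident)
```

When the extrinsics are correct, overlapping ground regions project consistently and the error is small; misaligned extrinsics shift the projections onto mismatched pixels and inflate it, which is what makes the quantity usable as an optimization objective.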