Abstract: Generally, a surround-view system (SVS), which is an indispensable component of advanced driving assistant systems (ADAS), consists of four to six wide-angle fisheye cameras. As long as both intrinsics and extrinsics of all cameras have been calibrated, a top-down surround-view with the real scale can be synthesized at runtime from fisheye images captured by these cameras. However, when the vehicle is driving on the road, relative poses between cameras in the SVS may change from the initial calibrated states d…
“…On the other hand, a robust and mature online extrinsic calibration pipeline is still lacking. Recently, some research projects such as [3][4][5][6][7][8][9][10] among others, have proposed different approaches attempting to fill this research gap. Nevertheless, [3] [8] [9] and [10] are the ones that provide the most encouraging results.…”
Section: Online Extrinsic Camera Calibration Literature (citation type: mentioning)
confidence: 99%
“…Recently, some research projects such as [3][4][5][6][7][8][9][10] among others, have proposed different approaches attempting to fill this research gap. Nevertheless, [3] [8] [9] and [10] are the ones that provide the most encouraging results. Concerning [3] [8] and [9], all these projects aim to minimize the photometric discrepancy between adjacent cameras to optimize cameras' extrinsic parameters.…”
Section: Online Extrinsic Camera Calibration Literature (citation type: mentioning)
confidence: 99%
“…Nevertheless, [3] [8] [9] and [10] are the ones that provide the most encouraging results. Concerning [3] [8] and [9], all these projects aim to minimize the photometric discrepancy between adjacent cameras to optimize cameras' extrinsic parameters. Moreover, all of them are based on the use of natural images to recalibrate the cameras, so any external object such as a calibration pattern is not required.…”
Section: Online Extrinsic Camera Calibration Literature (citation type: mentioning)
confidence: 99%
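The quoted passages describe approaches that minimize the photometric discrepancy between adjacent cameras to optimize the extrinsic parameters. A minimal sketch of such a cost, assuming hypothetical `sample_a`/`sample_b` projection-and-sampling callables in place of the real fisheye camera models:

```python
import numpy as np

def photometric_discrepancy(sample_a, sample_b, ground_points, extrinsics):
    """Illustrative photometric cost between two adjacent cameras.

    sample_a / sample_b are hypothetical callables returning the intensity
    each camera observes at a given 3-D ground point under the candidate
    extrinsics. Online calibration would search for the extrinsic
    parameters that minimize this cost over the cameras' overlap region.
    """
    diffs = [sample_a(p, extrinsics) - sample_b(p, extrinsics)
             for p in ground_points]
    return float(np.sum(np.square(diffs)))
```

In the cited approaches this kind of cost is driven by natural road textures in the overlapping field of view, which is why no external calibration pattern is required.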
“…Nevertheless, there exists a research gap on how these poses can be optimally online reestimated. Recently, some projects [3][4][5][6][7][8][9][10] have proposed different solutions which aim to address this problem. However, these works disregard the practical aspects of when it is necessary to reestimate the cameras' relative poses.…”
Over the last decade, the automotive industry has introduced advanced driving assistance systems (ADAS) and automated driving (AD) features on public roads to reduce fatality rates. One of these ADAS is the surround-view system, which provides an orthographic view of the vehicle's surroundings using at least four fish-eye cameras embedded in the vehicle. Small bumps or temperature changes may modify these cameras' relative poses, leading to geometrical mismatches between views in the top-view projection plane. In addition, terrain irregularities may misalign the orthographic view with the ground-plane surface. Both problems can be solved by reestimating the relative poses of the cameras with respect to a single common reference point on the vehicle. This procedure, also known as recalibration, is performed offline in technical garages or by online calibration mechanisms at engine start. However, it is a slow and cumbersome process. Research to date has studied how to optimally recalibrate these cameras online, neglecting the practical question of when this procedure should be undertaken. Depending on the functionalities for which the embedded cameras are required, a compromise must therefore be found between using out-of-calibration cameras and accepting the consequences of the recalibration process. Such a compromise would prevent reestimating the cameras' relative poses in situations where the misalignment between adjacent cameras is not noticeable. For this reason, a novel approach that measures the degree of calibration between cameras embedded in a vehicle is proposed. The method extracts relevant features from predefined regions of interest in each camera view using the histogram of oriented gradients (HOG) descriptor; features belonging to adjacent cameras are then compared using the cosine similarity metric.
The proposed method is evaluated in the open-source AD research simulator CARLA, with a detailed analysis that objectively highlights its usefulness for studying the degree of calibration of a camera array in a surround-view system.
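The measurement described in the abstract (HOG features over predefined regions of interest, compared with cosine similarity) can be sketched roughly as follows. This is a simplified illustration, not the authors' implementation: the single-histogram HOG and the ROI layout are assumptions.

```python
import numpy as np

def hog_descriptor(patch, n_bins=9):
    """Simplified HOG: one global histogram of unsigned gradient
    orientations, weighted by gradient magnitude. A full HOG descriptor
    adds cells and block normalization; this is for illustration only."""
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.mod(np.degrees(np.arctan2(gy, gx)), 180.0)
    hist, _ = np.histogram(angle, bins=n_bins, range=(0.0, 180.0),
                           weights=magnitude)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def cosine_similarity(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0

def calibration_score(img_a, roi_a, img_b, roi_b):
    """Degree-of-calibration score between two adjacent camera views:
    HOG descriptors of the overlapping ROIs (y0, y1, x0, x1), compared
    with cosine similarity (1.0 = identical structure, 0.0 = unrelated)."""
    ya0, ya1, xa0, xa1 = roi_a
    yb0, yb1, xb0, xb1 = roi_b
    fa = hog_descriptor(img_a[ya0:ya1, xa0:xa1])
    fb = hog_descriptor(img_b[yb0:yb1, xb0:xb1])
    return cosine_similarity(fa, fb)
```

When the cameras are well calibrated, the overlapping ROIs of adjacent views image the same ground region, so their gradient structure agrees and the score stays close to 1; a drifting extrinsic pose shows up as a falling score.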
“…Advanced Driver Assistance System (ADAS) is becoming more and more popular with various features [1] [2]. Displaying surrounding information of a vehicle for assistive parking and low-speed maneuvering is one of the ADAS functions and of interest to both academia and industrial communities [3][4] [5]. A classical Surround View System consists of three components: (1) four fisheye cameras mounted around the vehicle for image capturing; (2) a central computing unit where raw video frames from all cameras are processed; and (3) a visualization device, such as a built-in display, where the rendered results from specified perspectives are shown to the driver.…”
Providing a blind-spot-free vehicle surround view to the driver is important for many driving maneuvers such as parking. Existing vehicle Surround View Systems (SVS) can only visualize the front, left, rear and right sides of the vehicle, leaving the area under the vehicle unknown. However, perceiving the under-vehicle area is critical for many tasks such as passing over speed bumps, avoiding potholes, and driving on narrow roads with high curbs or on unpaved terrain. In this paper, we propose a novel Under Vehicle Reconstruction (UVR) algorithm which utilizes what the vehicle has seen in the past, together with vehicle egomotion, to “see” through the originally invisible under-vehicle area. First, the front or rear fisheye cameras are used to build a local textured map for future use. Second, the vehicle’s precise location and orientation within the local map are estimated using vehicle egomotion. Finally, the corresponding under-vehicle texture is retrieved from the map using the vehicle’s pose and stitched together with the traditional Surround View System output to provide a new blind-spot-free visualization. As far as we know, our work is the first solution that provides full under-vehicle reconstruction, which empowers many Advanced Driving Assistant System (ADAS) functionalities such as a transparent hood or transparent vehicle. Experiments on both simulated and real data are presented to show the effectiveness and robustness of the proposed algorithm.
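The retrieval step of the UVR algorithm described above (look up the texture under the vehicle in a previously built top-down local map using the estimated vehicle pose) might be sketched as follows; all function and parameter names here are hypothetical, and the real system works on full fisheye imagery rather than a single-channel map:

```python
import numpy as np

def sample_under_vehicle(local_map, map_origin, map_res, pose, footprint):
    """Retrieve the ground texture currently under the vehicle from a
    top-down local map built from past front/rear camera frames.

    local_map  : H x W array of ground texture (one value per map pixel)
    map_origin : world (x, y) coordinate of map pixel (0, 0)
    map_res    : metres per pixel
    pose       : (x, y, yaw) of the vehicle in the world frame
    footprint  : (length, width) of the under-vehicle area in metres
    """
    x, y, yaw = pose
    length, width = footprint
    h, w = int(round(length / map_res)), int(round(width / map_res))
    # grid of vehicle-frame sample points covering the footprint
    u = (np.arange(h) - h / 2) * map_res   # longitudinal offsets
    v = (np.arange(w) - w / 2) * map_res   # lateral offsets
    vu, vv = np.meshgrid(u, v, indexing="ij")
    # rotate into the world frame and translate by the vehicle position
    wx = x + vu * np.cos(yaw) - vv * np.sin(yaw)
    wy = y + vu * np.sin(yaw) + vv * np.cos(yaw)
    # world coordinates -> nearest map pixel indices
    px = np.clip(np.rint((wx - map_origin[0]) / map_res).astype(int),
                 0, local_map.shape[0] - 1)
    py = np.clip(np.rint((wy - map_origin[1]) / map_res).astype(int),
                 0, local_map.shape[1] - 1)
    return local_map[px, py]
```

The returned patch can then be stitched into the surround view in place of the otherwise blind under-vehicle region; accuracy hinges on the egomotion estimate that places the pose within the map.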