2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.01209

Deep Single Image Camera Calibration With Radial Distortion

Cited by 64 publications (45 citation statements); references 18 publications.

Citation statements (ordered by relevance):
“…Therefore, it can be said that the detection accuracy and detection speed of the algorithm in this paper are superior to those of the other two detection algorithms, using standard gauge blocks. After camera calibration [38,39], the actual size pix corresponding to each pixel is 0.007291 mm, and the actual diameter D of the brake master cylinder compensation hole is calculated by the formula D = 2 * pix * n = 2 * 0.007291 * 48.105 = 0.70151 mm (20). Finally, using the random Hough transform, the gradient Hough transform, and the algorithm in this paper, the same master cylinder compensation hole size is detected 30 times, and the pixel size measured each time is recorded to form the experimental data shown in Table 7.…”
Section: And Vote On Point C I (mentioning)
confidence: 99%
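The diameter value quoted above follows directly from the stated formula. Below is a minimal Python sketch that reproduces it; the values of pix and n are taken from the excerpt, and the variable names are only illustrative.

pix_mm = 0.007291             # mm per pixel, from the camera calibration step quoted above
n_pixels = 48.105             # detected radius of the compensation hole in pixels, from the excerpt
D_mm = 2 * pix_mm * n_pixels  # D = 2 * pix * n, the quoted formula
print(f"D = {D_mm:.5f} mm")   # prints D = 0.70147 mm; the excerpt reports 0.70151 mm,
                              # the small difference coming from pix being rounded in the text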
“…CNN deep-learning approaches such as Bogdan et al. [21] and Lopez et al. [22] cannot rectify distorted samples under illumination changes, and certain higher distortion ranges cannot be handled consistently. Additionally, deep GANs such as Liao, Kang et al. [23] are used to generate corresponding rectified samples for a distorted image.…”
Section: Previous Work (mentioning)
confidence: 99%
“…Blue to yellow means small to large values. Different from previous works of camera calibration that leverage image contents from a noise-free image (e.g., [56,16,32]), our method exploits […] (free from image contents, Figure 1 row 4) as the calibration cue to estimate camera parameters.…”
Section: Motivation (mentioning)
confidence: 99%
“…Recently, learning-based methods have been proposed to deal with a single image in the wild. These methods solve for different components of the calibration parameters, such as vanishing points [63] (combined with geometry-based methods), FoV [56], the horizon line [58] (to estimate the extrinsic rotation matrix), the radial distortion parameter [38], the extrinsic rotation matrix and FoV [16], or the extrinsic rotation matrix together with the intrinsic parameters of FoV and radial distortion [32].…”
Section: Single Image Camera Calibration (mentioning)
confidence: 99%