2018 21st International Conference on Intelligent Transportation Systems (ITSC)
DOI: 10.1109/itsc.2018.8569665
Monocular Fisheye Camera Depth Estimation Using Sparse LiDAR Supervision

Abstract: Near-field depth estimation around a self-driving car is an important function that can be achieved by four wide-angle fisheye cameras with a field of view of over 180°. Depth estimation based on convolutional neural networks (CNNs) produces state-of-the-art results, but progress is hindered because depth annotation cannot be obtained manually. Synthetic datasets are commonly used, but they have limitations. For instance, they do not capture the extensive variability in the appearance of objects like vehicle…
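The supervision scheme the title describes — sparse LiDAR returns as ground truth for a dense depth prediction — typically reduces to a regression loss masked to the pixels where a LiDAR return exists. A minimal sketch of such a masked L1 loss follows; the function name and the zero-means-no-return encoding are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def sparse_depth_l1_loss(pred, lidar_depth):
    """L1 loss computed only at pixels with a valid LiDAR return.

    pred and lidar_depth are H x W arrays; pixels without a return
    are encoded as 0 in lidar_depth (an assumed convention).
    """
    mask = lidar_depth > 0  # valid-return mask
    if not mask.any():
        return 0.0
    return float(np.abs(pred[mask] - lidar_depth[mask]).mean())

# Toy example: 2x2 prediction, sparse target with one valid pixel.
pred = np.array([[1.0, 2.0], [3.0, 4.0]])
target = np.array([[0.0, 0.0], [2.5, 0.0]])
loss = sparse_depth_l1_loss(pred, target)  # |3.0 - 2.5| = 0.5
```

Masking the loss this way lets dense predictions be trained from projections covering only a few percent of image pixels, which is what makes LiDAR usable as supervision without manual depth annotation.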

Cited by 43 publications (26 citation statements) | References 25 publications
“…For instance, Menze et al [25] running time is 50 minutes per frame which makes it impossible for usage in a real-time application such as the autonomous driving. Deep learning algorithms are becoming successful beyond object detection [30] for applications like visual SLAM [26], depth estimation [17], soiling detection [33] but it is still relatively less explored for MOD task.…”
Section: Related Work
confidence: 99%
“…dynamic pixels. Depth task provides scaleaware distance in 3D space validated by occlusion corrected LiDAR depth [16]. The model is trained jointly using the public WoodScape [17] dataset comprising 8k samples and evaluated on 2k samples.…”
Section: Supervised Training
confidence: 99%
“…They typically estimate a dense depth map from raw LiDAR measurements. Current deep-learning based methods for depth completion (Xu et al, 2019; Tang et al, 2020; Jaritz et al, 2018; Park et al, 2020; Ma and Karaman, 2018; Kumar et al, 2018; Zhao et al, 2021) usually learn to regress ground-truth depth maps in a fully-supervised setup (Figure 1b). Such approaches generally operate over visual and LiDAR inputs.…”
Section: Background and Related Work
confidence: 99%