2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2020
DOI: 10.1109/cvprw50498.2020.00177
Mind the Gap - A Benchmark for Dense Depth Prediction Beyond Lidar

Cited by 4 publications (1 citation statement) · References 27 publications
“…The basic principle of depth-enabled dataset construction is to use a device with reliable depth estimation capability that won't be available during evaluation. Typically, one can use a rig with an RGB camera and a depth sensor such as structured light [21], time of flight, an embedded Lidar [34,35], or a light-field camera grid [27]. At evaluation time, only the camera is available, and the evaluation step then measures the agreement between the "reliable depth" measured by the dedicated sensor and the estimated depth.…”
Section: Constructing a Depth Enabled Dataset
confidence: 99%
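
The agreement measurement described above is usually computed only over pixels where the dedicated sensor returned a valid measurement. As a minimal sketch of what such an evaluation step might look like, the Python snippet below computes three metrics that are standard in depth benchmarks (absolute relative error, RMSE, and the δ<1.25 accuracy); the function name, the zero-encoding of invalid pixels, and the synthetic data are illustrative assumptions, not the protocol of the cited paper.

```python
import numpy as np

def depth_agreement(gt_depth: np.ndarray, pred_depth: np.ndarray) -> dict:
    """Agreement between 'reliable depth' from a dedicated sensor and
    estimated depth, restricted to pixels the sensor actually measured.
    Assumption: invalid sensor pixels are encoded as 0."""
    valid = gt_depth > 0                         # sensor coverage mask
    gt, pred = gt_depth[valid], pred_depth[valid]

    abs_rel = np.mean(np.abs(gt - pred) / gt)    # absolute relative error
    rmse = np.sqrt(np.mean((gt - pred) ** 2))    # root mean squared error
    ratio = np.maximum(gt / pred, pred / gt)     # per-pixel worst-case ratio
    delta1 = np.mean(ratio < 1.25)               # fraction within 25% of truth

    return {"abs_rel": abs_rel, "rmse": rmse, "delta1": delta1}

# Synthetic example: a sensor depth map with 30% holes and a noisy estimate.
rng = np.random.default_rng(0)
gt = rng.uniform(1.0, 10.0, size=(480, 640))
gt[rng.random(gt.shape) < 0.3] = 0.0             # pixels without sensor depth
pred = np.clip(gt + rng.normal(0.0, 0.2, gt.shape), 0.1, None)
print(depth_agreement(gt, pred))
```

Masking to the sensor's valid pixels matters because Lidar and structured-light sensors leave gaps (the "gap" in the paper's title), so unmasked metrics would mix genuine prediction error with sensor dropout.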