2020
DOI: 10.48550/arxiv.2008.00092
Preprint
Deep Depth Estimation from Visual-Inertial SLAM

Abstract: This paper addresses the problem of learning to complete a scene's depth from sparse depth points and images of indoor scenes. Specifically, we study the case in which the sparse depth is computed from a visual-inertial simultaneous localization and mapping (VI-SLAM) system. The resulting point cloud is of low density, noisy, and nonuniform in its spatial distribution, as compared to the input from active depth sensors, e.g., LiDAR or Kinect. Since the VI-SLAM produces point clouds only over textured areas, w…

Cited by 1 publication (1 citation statement). References 43 publications.
“…The fused feature maps are passed to an encoder-decoder network to generate the output. Sartipi et al. [141] use RGB images, learned surface normals, and sparse depth from visual-inertial SLAM (VI-SLAM) to infer dense depth maps. Since the depth map from VI-SLAM is sparser, a sparse-depth enrichment step is performed to increase its density.…”
Section: B. Sparse Depth Map From SLAM
Mentioning confidence: 99%
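The enrichment step mentioned in the citing statement can be pictured as propagating each sparse VI-SLAM depth measurement into a small neighborhood of empty pixels. The sketch below is an assumed, illustrative scheme (simple neighborhood fill); the actual enrichment in Sartipi et al. [141] may differ, and the function name and parameters are hypothetical.

```python
import numpy as np

def enrich_sparse_depth(depth, radius=2):
    """Densify a sparse depth map by copying each valid (nonzero) depth
    value into its (2*radius+1)^2 neighborhood, filling only empty pixels.

    Illustrative only: an assumed stand-in for the sparse-depth
    enrichment step described for Sartipi et al. [141].
    """
    h, w = depth.shape
    out = depth.copy()
    ys, xs = np.nonzero(depth > 0)          # valid sparse measurements
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        patch = out[y0:y1, x0:x1]
        patch[patch == 0] = depth[y, x]     # fill only empty pixels
    return out

# Toy example: one VI-SLAM depth point on a 5x5 grid.
sparse = np.zeros((5, 5), dtype=np.float32)
sparse[2, 2] = 1.5
dense = enrich_sparse_depth(sparse, radius=1)
print(int((dense > 0).sum()))  # 9 pixels now carry depth
```

The enriched map remains an approximation; in the cited pipeline it serves only as a denser input to the encoder-decoder network, which predicts the final dense depth.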