2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros45743.2020.9341448
Deep Depth Estimation from Visual-Inertial SLAM

Cited by 29 publications (24 citation statements)
References 38 publications
“…Depth completion: One approach towards dense depth estimation from multiple views is to: (i) Create a sparse point cloud (by tracking distinct 2D points across images and triangulating their 3D positions) and then (ii) Employ a depth-completion neural network that takes the sparse depth image along with the RGB image as inputs and exploits the scene's context to create a dense-depth estimate (e.g., [6], [7], [8], [9], [10]). Although these approaches have relatively low processing requirements, they are typically sensitive to the inaccuracies and sparsity level of their depth input; thus, they often fail to produce accurate depth estimates in textureless regions that lack sparse depth information.…”
Section: Related Work: Multi-view Depth-estimation Methods Can Be Clas... (mentioning)
confidence: 99%
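The depth-completion pipeline quoted above first renders triangulated 3D points into a sparse depth image, which is then fed together with the RGB frame to a completion network. A minimal numpy sketch of step (i), assuming a pinhole camera model; the function name, intrinsics, and point values are hypothetical and purely illustrative:

```python
import numpy as np

def sparse_depth_image(points_3d, K, height, width):
    """Project triangulated 3D points (camera frame) into a sparse depth
    image; zero-valued pixels mark locations with no depth measurement."""
    depth = np.zeros((height, width), dtype=np.float32)
    # keep only points in front of the camera
    pts = points_3d[points_3d[:, 2] > 0]
    # pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy
    u = (K[0, 0] * pts[:, 0] / pts[:, 2] + K[0, 2]).astype(int)
    v = (K[1, 1] * pts[:, 1] / pts[:, 2] + K[1, 2]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth[v[inside], u[inside]] = pts[inside, 2]
    return depth

# hypothetical intrinsics and three triangulated points
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 24.0],
              [0.0,   0.0,  1.0]])
pts = np.array([[0.0, 0.0, 2.0],
                [0.5, 0.2, 4.0],
                [-0.1, 0.1, 1.0]])
d = sparse_depth_image(pts, K, 48, 64)
```

The resulting sparse depth map and the RGB image would then form the two-channel input the quoted approaches pass to the completion network.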
“…Specifically, approaches such as [4], [5] predict dense depth from a single image by taking advantage of images' contextual cues learned from large datasets; hence, they rely less on texture, as compared to classical methods. Moreover, to overcome the scale issue of single-view methods, depth-completion networks (e.g., [6], [7], [8], [9], [10]) leverage sparse point clouds from classical methods and complete the dense depth map using single-view cues. In order to further exploit multi-view information, depth-estimation networks taking multiple images as input have also been considered.…”
Section: Introduction (mentioning)
confidence: 99%
“…Whereas [8,7,32,33] learned uncertainty of estimates, [41] leveraged confidence maps, and [31,48,50] used surface normals for guidance. Like us, [27,36,52] proposed lightweight networks that can be deployed onto SLAM/VIO systems.…”
Section: Related Work and Contributions (mentioning)
confidence: 99%
“…To integrate visual odometry (VO) or the SLAM system into depth estimation, the authors of [10,12,13] presented a neural network to correct classical VO estimators in a self-supervised manner and enhance geometric constraints. Self-supervised depth estimation, using the pose and depth between two adjacent frames, establishes a depth reprojection error and image reconstruction error [14][15][16][17]. In a monocular depth self-supervised estimation, the depth value estimated by the depth estimation network (DepthNet) and the pose between adjacent images have a decisive influence on the depth estimation result.…”
Section: Introduction (mentioning)
confidence: 99%
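The depth reprojection and image reconstruction errors mentioned in this excerpt rest on warping pixels between adjacent frames using the predicted depth and relative pose. A minimal sketch of the pixel correspondence underlying such a loss, assuming a pinhole model; the intrinsics, pose, and numbers here are hypothetical, not taken from the cited papers:

```python
import numpy as np

def reproject(u, v, depth, K, R, t):
    """Map pixel (u, v) with estimated depth in frame A to its
    corresponding pixel in frame B under relative pose (R, t).
    A photometric loss compares image intensities at the two locations."""
    # back-project the pixel to a 3D point in frame A
    x = np.linalg.inv(K) @ np.array([u, v, 1.0]) * depth
    # transform into frame B, then project with the pinhole model
    p = K @ (R @ x + t)
    return p[0] / p[2], p[1] / p[2]

# hypothetical numbers: identity rotation, small sideways translation
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 24.0],
              [0.0,   0.0,  1.0]])
u2, v2 = reproject(32.0, 24.0, 2.0, K, np.eye(3), np.array([0.1, 0.0, 0.0]))
```

Summing the intensity differences between each pixel and its reprojected counterpart over the image yields the reconstruction error that supervises both DepthNet and the pose estimate.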
“…Motivated by these observations, we present a new deep visual-inertial odometry (DeepVIO)-based ego-motion and depth prediction system that combines the strengths of learning-based VIO and geometrical depth estimation [16,19,20]. It uses DeepVIO geometrical constraints [21], where they are available, to achieve accurate odometry fusing with raw inertial measurement unit (IMU) data and sparse point clouds.…”
Section: Introduction (mentioning)
confidence: 99%