2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros40897.2019.8967677
Monocular Depth Estimation in New Environments With Absolute Scale

Cited by 16 publications (13 citation statements). References 19 publications.
“…Methods addressing this problem add 3D-geometry-based losses to introduce scale-consistency [9], [10], yet utilize at least some depth or stereo supervision to introduce scale-awareness [11], [12]. Recently, [26] introduced a similar instantaneous-velocity-based multi-modal supervision.…”
Section: Related Work
confidence: 99%
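The instantaneous-velocity supervision mentioned in this excerpt can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch version of such a loss term (the function name and signature are illustrative, not taken from the cited papers): it pins the magnitude of the predicted inter-frame translation to the metric distance implied by the measured speed, which is what makes the otherwise scale-ambiguous depth and pose metric.

```python
import torch

def velocity_supervision_loss(pred_translation, speed, dt):
    """Hypothetical sketch of an instantaneous-velocity supervision term.

    pred_translation: (B, 3) inter-frame translation from the pose network.
    speed:            (B,)   measured instantaneous speed [m/s].
    dt:               (B,)   time elapsed between the two frames [s].
    """
    pred_dist = pred_translation.norm(dim=-1)     # ||t|| in network units
    gt_dist = speed * dt                          # metric distance traveled
    return torch.abs(pred_dist - gt_dist).mean()  # L1 penalty on the gap
```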
“…Consequently, most existing methods scale the estimated relative depth using the LiDAR ground truth during evaluation. Recent methods tackling this problem utilize additional 3D geometric constraints to introduce scale-consistency [9], [10], but require at least some depth or stereo supervision to predict at metric scale [11], [12]. Nevertheless, obtaining metric-scale predictions at low cost is necessary for practical deployment.…”
Section: Introduction
confidence: 99%
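The evaluation-time rescaling referred to here is usually per-image median scaling, a standard step when benchmarking self-supervised monocular depth on KITTI. A minimal sketch (assuming NumPy arrays and the customary KITTI depth cap of 80 m) might look like this:

```python
import numpy as np

def median_scale(pred_depth, gt_depth, min_d=1e-3, max_d=80.0):
    """Rescale a relative depth map using the LiDAR ground truth.

    Self-supervised monocular methods predict depth only up to an unknown
    scale, so each prediction is multiplied by the ratio of median ground-
    truth depth to median predicted depth over valid pixels before the
    error metrics are computed.
    """
    mask = (gt_depth > min_d) & (gt_depth < max_d)  # valid LiDAR returns
    scale = np.median(gt_depth[mask]) / np.median(pred_depth[mask])
    return pred_depth * scale
```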
“…Although these methods achieve great success, they cannot recover metric depth and ego-motion. Some methods [38], [43], [36] also try to recover the real scale of the scene by leveraging prior information such as the height of the camera.…”
Section: Structure From Motion
confidence: 99%
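One common way to exploit such a camera-height prior is sketched below, under simplifying assumptions (y-down camera axes, a roughly level camera, and a pixel mask known to cover the road surface; all names are illustrative, not from the cited works): back-project the assumed ground pixels with the intrinsics, take the median of their vertical coordinates as the unscaled camera height, and divide the known mounting height by it to obtain the metric scale factor.

```python
import numpy as np

def scale_from_camera_height(pred_depth, K, cam_height_m, road_mask):
    """Hypothetical sketch: metric scale from a known camera height.

    pred_depth:   (H, W) relative (unscaled) depth map.
    K:            (3, 3) camera intrinsic matrix.
    cam_height_m: known metric height of the camera above the road [m].
    road_mask:    (H, W) boolean mask of pixels assumed to be road.
    """
    v, u = np.nonzero(road_mask)      # pixel coordinates of road points
    z = pred_depth[v, u]              # unscaled depth along the optical axis
    y = (v - K[1, 2]) * z / K[1, 1]   # vertical coordinate (y points down)
    est_height = np.median(y)         # robust unscaled camera height
    return cam_height_m / est_height  # multiply depths by this factor
```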
“…The ability of a robot to build a consistent map during its autonomous mission, widely known as Simultaneous Localization and Mapping (SLAM) [6], is strengthened when scale information is provided, since it enables robust visual odometry [7]. Thus, depth sensing is essential in any contemporary SLAM system [8], [9]. Commonly used sensors include LiDAR and stereo cameras, which are expensive and bulky.…”
Section: Introduction
confidence: 99%
“…1. Various approaches [11], [22], [12], [18], [23] have been developed on the NYU Depth v2 dataset [24], an indoor environment without humans, while the KITTI vision benchmark suite [25] is selected for the outdoor cases [8], [22], [13]. Thus, no suitable data sequence was available for our method's evaluation.…”
Section: Introduction
confidence: 99%