2023
DOI: 10.1016/j.cviu.2022.103601

LiDARTouch: Monocular metric depth estimation with a few-beam LiDAR


Cited by 13 publications (5 citation statements)
References 6 publications
“…Acquiring accurate scale information directly from monocular images is a common challenge, often necessitating the use of LiDAR [20, 21, 22] or stereo cameras [23, 24, 25, 26, 27] for depth estimation. Among stereo methodologies, Mo [24] proposed a fusion method that combines the advantages of tightly coupled depth sensors and stereo cameras to achieve dense depth estimation with improved accuracy…”
Section: Methods
confidence: 99%
“…For monocular depth estimation in vision-based autonomous systems, dense depth can be obtained either with auxiliary input from one or more expensive 64-beam LiDARs, or from cameras alone, in which case the method suffers from scale ambiguity and infinite-depth problems. Addressing this, "LiDARTouch: Monocular metric depth estimation with a few-beam LiDAR" [121] proposes a dense metric depth estimation method that fuses a monocular camera with a lightweight LiDAR. Four previous architectures are revisited, and the performance gap on KITTI between self-supervised single-image depth prediction and fully supervised depth completion is greatly reduced…”
Section: Self-supervised Monocular Models
confidence: 99%
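The excerpt above describes combining self-supervised monocular depth prediction with supervision from a few-beam LiDAR. Below is a minimal sketch of that general idea in PyTorch; the loss names, weighting, and tensor shapes are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (assumed names and shapes, not the authors' exact loss):
# a dense self-supervised photometric term plus an L1 term on the few
# pixels where the sparse LiDAR provides metric depth.
import torch

def sparse_lidar_loss(pred_depth, lidar_depth):
    """L1 penalty restricted to pixels with a LiDAR return.

    pred_depth:  (B, 1, H, W) predicted depth in metres
    lidar_depth: (B, 1, H, W) projected few-beam LiDAR map, 0 where empty
    """
    mask = (lidar_depth > 0).float()
    return (mask * (pred_depth - lidar_depth).abs()).sum() / mask.sum().clamp(min=1.0)

def photometric_l1(target, reconstruction):
    """Dense L1 photometric term; real pipelines typically add an SSIM term."""
    return (target - reconstruction).abs().mean()

def total_loss(target, reconstruction, pred_depth, lidar_depth, w_lidar=0.1):
    # The photometric term gives dense gradients from view synthesis, while
    # the sparse LiDAR term anchors the prediction to metric scale.
    return photometric_l1(target, reconstruction) + w_lidar * sparse_lidar_loss(
        pred_depth, lidar_depth)
```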
“…Monocular cameras typically obtain depth information of the captured scene through motion, using structure from motion [16]. Bartoccioni et al. [17] proposed a method that fuses a monocular camera with a simple, inexpensive LiDAR containing only four scan lines…”
Section: Monocular Depth Estimation
confidence: 99%
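The four-scan-line LiDAR mentioned above is typically projected into the camera image to obtain a sparse metric depth map. The sketch below illustrates that projection under assumed conventions (a known LiDAR-to-camera rigid transform and pinhole intrinsics); it is not the cited papers' code.

```python
# Illustrative sketch of projecting a few-beam LiDAR scan into the camera
# image to build the sparse depth map used for metric grounding.
import numpy as np

def project_lidar_to_depth_map(points_lidar, T_cam_lidar, K, h, w):
    """points_lidar: (N, 3) xyz in the LiDAR frame (e.g. 4 scan lines).
    T_cam_lidar: (4, 4) rigid transform LiDAR -> camera.
    K: (3, 3) pinhole intrinsics. Returns an (h, w) depth map, 0 = no return."""
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T)[:3]          # (3, N) in camera frame
    in_front = pts_cam[2] > 0                      # keep points ahead of camera
    pts_cam = pts_cam[:, in_front]
    uvw = K @ pts_cam                              # pinhole projection
    u = np.round(uvw[0] / uvw[2]).astype(int)
    v = np.round(uvw[1] / uvw[2]).astype(int)
    depth = np.zeros((h, w), dtype=np.float32)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[valid], u[valid]] = pts_cam[2, valid]  # store metric z as depth
    return depth
```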