2022 International Conference on 3D Vision (3DV)
DOI: 10.1109/3dv57658.2022.00074
A Benchmark and a Baseline for Robust Multi-view Depth Estimation

Cited by 4 publications (4 citation statements)
References 26 publications
“…For MVD, we compare with the well-known classic COLMAP [55], [56] and deep-learning-based methods such as MVSNet [33], Vis-MVSNet [37], MVS2D [57], DeMoN [28], DeepV2D [30], the Robust MVD baseline [31], DUSt3R [42], and more. In particular, the latest fully supervised model DUSt3R [42] is the closest to ours.…”
Section: Methods (mentioning)
confidence: 99%
“…Not operating only on pairs of images, DeepTAM [29] and DeepV2D [30] process more images with alternating mapping and tracking modules. However, according to [31], such methods overfit to the training scale and camera parameters, which makes them difficult to generalize to arbitrary real-world applications.…”
Section: Multi-view Depth Estimation (mentioning)
confidence: 99%
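The scale issue raised in [31] is usually handled at evaluation time by aligning a prediction to the ground truth before computing error metrics, so that methods trained on different depth ranges can still be compared. Below is a minimal NumPy sketch of one common choice, median-ratio scale alignment followed by the absolute relative error; the function names and the median-ratio choice are illustrative assumptions, not the benchmark's actual implementation.

```python
import numpy as np

def align_scale(pred_depth, gt_depth, eps=1e-6):
    """Align a predicted depth map to ground truth with a single scalar scale.

    A method that overfits to the depth range of its training data can still be
    compared fairly by factoring out one global scale, here the median ratio of
    valid ground-truth to predicted depths (an illustrative choice).
    """
    valid = (gt_depth > 0) & (pred_depth > eps)
    scale = np.median(gt_depth[valid] / pred_depth[valid])
    return pred_depth * scale

def abs_rel_error(pred_depth, gt_depth):
    """Absolute relative error over pixels with valid ground truth."""
    valid = gt_depth > 0
    return np.mean(np.abs(pred_depth[valid] - gt_depth[valid]) / gt_depth[valid])
```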