2022
DOI: 10.1007/s11263-022-01697-3
Vis-MVSNet: Visibility-Aware Multi-view Stereo Network

Cited by 83 publications (35 citation statements)
References 51 publications
“…
Method              Acc. (mm)       Comp. (mm)      Overall (mm)
COLMAP [15]         0.400           0.664           0.532
MVSNet [7]          0.396           0.527           0.462
R-MVSNet [10]       0.385           0.459           0.422
D²HC-RMVSNet [11]   0.395           0.378           0.386
PointMVSNet [16]    0.342           0.411           0.376
Vis-MVSNet [9]      0.369           0.361           0.365
AA-RMVSNet [1]      0.376           0.339           0.357
CasMVSNet [8]       0.325           0.385           0.355
EPP-MVSNet [17]     0.413           0.296           0.355
PatchmatchNet [18]  0.427           0.277           0.352
UCS-Net [19]        0.338           0.349           0.344
Ours+CasMVSNet      0.323 (−0.003)  0.347 (−0.038)  0.335 (−0.020)

Accuracy is measured as the mean absolute distance from the reconstructed points to the ground truth, Completeness as the mean absolute distance in the opposite direction, and the Overall score is the average of the two.…”
Section: Methods
confidence: 99%
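The Accuracy/Completeness/Overall protocol quoted above is simple to state in code. The following Python sketch (function names and array layout are our own assumptions; the official DTU evaluation additionally masks unobserved regions and clips outlier distances) computes the three scores from two point clouds given as N×3 arrays:

    import numpy as np
    from scipy.spatial import cKDTree

    def mean_nn_distance(src, dst):
        # Mean absolute distance from each point in `src`
        # to its nearest neighbor in `dst`.
        dists, _ = cKDTree(dst).query(src, k=1)
        return float(dists.mean())

    def dtu_scores(reconstruction, ground_truth):
        accuracy = mean_nn_distance(reconstruction, ground_truth)      # recon -> GT
        completeness = mean_nn_distance(ground_truth, reconstruction)  # GT -> recon
        overall = 0.5 * (accuracy + completeness)                      # average of the two
        return accuracy, completeness, overall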
“…MVSNet [7] recasts the MVS task as per-view depth map estimation: camera parameters are encoded via differentiable homography to build 3D cost volumes, which are then regularized by a 3D CNN to obtain a probability volume and the final depth. Inspired by MVSNet [7], later works [8, 9] follow this design paradigm. However, the 3D U-Net architecture used for cost volume regularization incurs substantial memory and runtime costs.…”
Section: Learning-Based Multi-view Stereo
confidence: 99%
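To make the pipeline described in this statement concrete, here is a condensed PyTorch sketch of the MVSNet-style steps: per-depth differentiable homography warping of source features into the reference view, a variance-based cost volume over views, and depth regression as the expectation over the softmaxed probability volume. All tensor shapes, names, and the homography interface are illustrative assumptions rather than the papers' actual code, and the 3D-CNN regularizer itself is omitted:

    import torch
    import torch.nn.functional as F

    def warp_by_homography(feat, H, height, width):
        # Warp source features [B,C,H,W] into the reference view with a
        # per-batch 3x3 homography H: [B,3,3] (one depth hypothesis).
        B = feat.shape[0]
        ys, xs = torch.meshgrid(
            torch.arange(height, dtype=feat.dtype, device=feat.device),
            torch.arange(width, dtype=feat.dtype, device=feat.device),
            indexing="ij")
        pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1)  # [3,HW]
        warped = H @ pix.unsqueeze(0).expand(B, -1, -1)       # [B,3,HW]
        xy = warped[:, :2] / warped[:, 2:3].clamp(min=1e-6)   # dehomogenize
        gx = 2.0 * xy[:, 0] / (width - 1) - 1.0               # to [-1,1] for grid_sample
        gy = 2.0 * xy[:, 1] / (height - 1) - 1.0
        grid = torch.stack([gx, gy], dim=-1).reshape(B, height, width, 2)
        return F.grid_sample(feat, grid, align_corners=True)

    def build_cost_volume(ref_feat, src_feats, homographies):
        # homographies[v][d]: [B,3,3] homography of source view v at depth index d.
        B, C, height, width = ref_feat.shape
        num_depths = len(homographies[0])
        per_depth = []
        for d in range(num_depths):
            views = [ref_feat] + [
                warp_by_homography(f, homographies[v][d], height, width)
                for v, f in enumerate(src_feats)]
            # Variance across views as the matching cost (MVSNet-style).
            per_depth.append(torch.stack(views).var(dim=0, unbiased=False))
        return torch.stack(per_depth, dim=2)  # [B,C,D,H,W], fed to a 3D CNN

    def soft_argmin_depth(logits, depth_values):
        # logits: [B,D,H,W] regularized cost; depth_values: [D] hypotheses.
        prob = F.softmax(-logits, dim=1)  # lower cost -> higher probability
        return (prob * depth_values.view(1, -1, 1, 1)).sum(dim=1)  # expected depth [B,H,W]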
“…Previous works adopted various solutions to learn mutual correlations [85] and so avoid the influence of incorrect matches caused by occlusion. Popular solutions include visibility-based aggregation [8], [80] and attention-based aggregation [66], [73], [77]. RayMVSNet follows the attention-based aggregation route.…”
Section: Related Work
confidence: 99%
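As a minimal illustration of the visibility-based route mentioned above, the sketch below fuses per-view cost volumes with per-pixel visibility weights, so that likely-occluded source views are down-weighted rather than averaged uniformly. Tensor shapes, names, and the source of the weights are assumptions for illustration, not the cited papers' exact formulation:

    import torch

    def visibility_weighted_fusion(pairwise_costs, visibility):
        # pairwise_costs: [V,B,C,D,H,W], one cost volume per source view.
        # visibility:     [V,B,1,1,H,W], per-pixel weights in [0,1]
        #                 (e.g. from a learned uncertainty head),
        #                 broadcast over channels C and depth hypotheses D.
        fused = (pairwise_costs * visibility).sum(dim=0)
        return fused / visibility.sum(dim=0).clamp(min=1e-6)  # normalized weighted mean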
“…The fact that even with everyday photogrammetric sensors we can retrieve point cloud representations is of huge impact. The clouds can be computed from multiple unreferenced images taken from various viewpoints within a scene (Kuhn et al., 2020; Zhang et al., 2020). Even monocular methods exist for producing highly accurate depth maps from single photographs (Alhashim and Wonka, 2018; Yuan et al., 2022).…”
Section: Technological Outlook
confidence: 99%