2021 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv48630.2021.00383

Long-range Attention Network for Multi-View Stereo

Cited by 25 publications (13 citation statements)
References 25 publications

“…
Method               Acc.    Comp.   Overall
Gipuma [8]           0.283   0.873   0.578
COLMAP [24], [25]    0.400   0.664   0.532
MVSNet [36]          0.396   0.527   0.462
R-MVSNet [37]        0.383   0.452   0.417
CasMVSNet [10]       0.346   0.351   0.348
PatchmatchNet [29]   0.427   0.277   0.352
D²HC-RMVSNet [34]    0.395   0.378   0.386
EPP-MVSNet [20]      0.413   0.296   0.355
AA-RMVSNet [31]      0.376   0.339   0.357
AACVP-MVSNet [39]    0.357   0.326   0.341
AttMVS [19]          0.383   0.329   0.356
LANet [41]           0.320   0.349   0.335
Ours                 0.278   0.377   0.327

…be found in Table II, where our approach ranks amongst the top published methods, while keeping runtime and GPU memory requirements low. Furthermore, we compare our method qualitatively to EPP-MVSNet [20], ranked highest in Table II, in Figure 6.…”
Section: Methods
Citation type: mentioning (confidence: 99%)
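
For context on the metrics in the table above: Acc. (accuracy) and Comp. (completeness) are mean point-to-surface distances in millimetres (lower is better), and Overall is simply their arithmetic mean. The values match commonly reported DTU benchmark numbers, so this convention can be checked against any row:

```latex
\text{Overall} = \tfrac{1}{2}\left(\text{Acc.} + \text{Comp.}\right),
\qquad \text{e.g. Gipuma: } \tfrac{1}{2}(0.283 + 0.873) = 0.578.
```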
“…In the tasks of object detection and image classification, attention mechanisms have achieved gains by augmenting convolutional models with content-based interactions [3]. This motivated several works [19], [41] to capitalize on the technique in MVS as well. A first attempt to exploit the local attention layer proposed in [21] was made by Yu et al. [39].…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
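
As a rough illustration of the "content-based interactions" described above, here is a minimal sketch of scaled dot-product attention applied across the spatial positions of a convolutional feature map, which is the typical way long-range attention is injected into an MVS feature extractor. This is not the exact layer of [19], [21], [39], or [41]; all names and shapes are hypothetical.

```python
import numpy as np

def spatial_attention(feat, wq, wk, wv):
    """Scaled dot-product attention over all spatial positions of one
    feature map: every position aggregates values from every other
    position, weighted by content similarity (a long-range interaction).

    feat: (H, W, C) feature map; wq/wk/wv: (C, C) projection matrices.
    """
    h, w, c = feat.shape
    x = feat.reshape(h * w, c)             # flatten spatial grid to tokens
    q, k, v = x @ wq, x @ wk, x @ wv       # per-position query/key/value
    scores = q @ k.T / np.sqrt(c)          # (HW, HW) pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over positions
    return (attn @ v).reshape(h, w, c)     # aggregate values, restore grid

# Toy usage: a 16x16 feature map with 8 channels.
rng = np.random.default_rng(0)
f = rng.standard_normal((16, 16, 8)).astype(np.float32)
w = [rng.standard_normal((8, 8)).astype(np.float32) * 0.1 for _ in range(3)]
out = spatial_attention(f, *w)
print(out.shape)  # (16, 16, 8)
```

Note the (HW, HW) score matrix: this quadratic cost is why several of the cited works restrict attention to local windows or epipolar neighbourhoods rather than the full image.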
“…Attention [39]–[41] has also proven able to extract representative features from ambiguous regions. Finally, the fused light-field features pass through another spatial-angular regularisation module to implicitly regularise the structure of LF contents:…”
Section: Fuse
Citation type: mentioning (confidence: 99%)
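
To make the spatial-angular idea concrete, a minimal sketch under stated assumptions: light-field features carry both an angular axis (V views) and spatial axes, and a spatial-angular module alternates aggregation over the two. The shapes, the function name, and the simple averaging/box-filter used here are illustrative stand-ins; the cited module is a learned network.

```python
import numpy as np

def spatial_angular_regularise(lf_feat):
    """Illustrative alternation of angular and spatial aggregation on
    light-field features of shape (V, H, W, C), where V is the number
    of angular views. A sketch only, not the cited learned module.
    """
    # Angular step: each spatial position mixes information across views.
    angular = lf_feat.mean(axis=0, keepdims=True)           # (1, H, W, C)
    lf_feat = lf_feat + angular                             # broadcast back

    # Spatial step: 3x3 box filter within each view (stand-in for a conv).
    padded = np.pad(lf_feat, ((0, 0), (1, 1), (1, 1), (0, 0)), mode="edge")
    spatial = np.zeros_like(lf_feat)
    for dy in range(3):
        for dx in range(3):
            spatial += padded[:, dy:dy + lf_feat.shape[1],
                              dx:dx + lf_feat.shape[2], :]
    return lf_feat + spatial / 9.0

out = spatial_angular_regularise(np.ones((5, 8, 8, 4), dtype=np.float32))
print(out.shape)  # (5, 8, 8, 4)
```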
“…CasMVSNet [19] adopts a cascade cost volume to gradually narrow the depth range and increase the cost-volume resolution. Similar ideas were later explored to reduce the memory cost of 3D convolutions and/or improve depth quality, such as coarse-to-fine depth optimization [10], [39], [68], [69], [71], [72], [79], attention-based feature aggregation [38], [66], [78], [84], and patch-matching-based methods [37], [62]. Unlike these works, RayMVSNet optimizes depth along each camera viewing ray instead of over the 3D volume, which is more lightweight.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
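
The coarse-to-fine strategy described in this excerpt can be summarised in a short sketch: each stage samples depth hypotheses over a range, estimates a depth, and the next stage re-samples fewer hypotheses over a narrower range centred on that estimate. The (hypothesis-count, range-fraction) schedule and the stand-in "best hypothesis" selection below are made up for illustration, not CasMVSNet's actual configuration.

```python
import numpy as np

def cascade_depth_hypotheses(d_min, d_max,
                             stages=((48, 1.0), (32, 0.25), (8, 0.05))):
    """Coarse-to-fine depth hypothesis sampling in the spirit of cascade
    cost volumes: later stages use fewer hypotheses over a narrower
    range around the previous stage's estimate.
    """
    center = 0.5 * (d_min + d_max)
    full_range = d_max - d_min
    for num_hyp, frac in stages:
        half = 0.5 * frac * full_range
        lo, hi = max(d_min, center - half), min(d_max, center + half)
        hyps = np.linspace(lo, hi, num_hyp)   # candidate depths this stage
        # Stand-in for cost-volume regularisation + (soft-)argmin: pretend
        # the winning hypothesis is the one nearest a "true" depth of 2.7.
        center = hyps[np.argmin(np.abs(hyps - 2.7))]
        print(f"{num_hyp:3d} hypotheses in [{lo:.3f}, {hi:.3f}] -> {center:.3f}")
    return center

cascade_depth_hypotheses(d_min=1.0, d_max=5.0)  # estimate converges on 2.7
```

Note how the total number of hypotheses evaluated (48 + 32 + 8) stays far below what a single-stage volume would need for the same final depth resolution, which is the memory saving the excerpt refers to; RayMVSNet goes further by optimizing along individual viewing rays rather than a full volume.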