2022
DOI: 10.1007/978-3-031-19821-2_38

RC-MVSNet: Unsupervised Multi-View Stereo with Neural Rendering

Cited by 24 publications (20 citation statements)
References 33 publications
“…To verify the effectiveness of the model, this paper visualizes the results of the open-source models MVSNet [4] and RC-MVSNet [7], and compares the 3D reconstruction quality of the proposed model against other models on typical scenes of the DTU dataset; the results are shown in Fig. 2.…”
Section: Results on DTU Dataset
confidence: 99%
“…DTU benchmark results:

Category  Method            Acc.   Comp.  Overall
Sup       SurfaceNet [3]    0.450  1.040  0.745
          MVSNet [4]        0.396  0.527  0.462
          R-MVSNet [3]      0.389  0.455  0.382
          Cas-MVSNet [3]    0.325  0.385  0.355
          CVP-MVSNet [3]    0.296  0.406  0.351
Unsup     Unsup-MVSNet [3]  0.881  1.073  0.977
          MVS2 [3]          0.760  0.515  0.637
          JDACS [3]         0.398  0.318  0.358
          RC-MVSNet [7]     0.396  0.295  0.345
          TRC-MVSNet        0.366  0.297  0.332…”
Section: Results on DTU Dataset
confidence: 99%
“…Our depth estimation network uses the same architecture as proposed by Deschaintre et al [DLG21], with a novel depth‐based rendering loss. Unlike the depth rendering loss recently proposed by Chang et al [CBZ*22], which uses a NeRF-style volumetric depth rendering, we solely utilize the predicted depth as the input for rendering and subsequently transform this predicted depth into surface normals. We fix specific parameters of the BRDF, such as roughness, to constant values in accordance with the conventional Cook‐Torrance model, and compute the loss between the rendered result and the ground-truth rendering.…”
Section: Methods
confidence: 99%
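The depth-to-normals and rendering-loss idea quoted above can be sketched as follows. This is a minimal NumPy illustration, not the cited authors' implementation: it substitutes simple Lambertian shading for the Cook-Torrance BRDF they mention, and the function names, intrinsics, and fixed light direction are my own assumptions.

```python
import numpy as np

def depth_to_normals(depth, fx=1.0, fy=1.0):
    """Convert a depth map (H, W) to per-pixel surface normals via
    finite differences (a common approximation)."""
    dz_dx = np.gradient(depth, axis=1) * fx
    dz_dy = np.gradient(depth, axis=0) * fy
    # The normal of the surface z = f(x, y) is (-dz/dx, -dz/dy, 1), normalized.
    n = np.stack([-dz_dx, -dz_dy, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def depth_rendering_loss(depth, target, light_dir=(0.0, 0.0, 1.0)):
    """Shade the normals derived from the predicted depth with a simple
    diffuse model and compare to a target rendering with an L1 loss."""
    normals = depth_to_normals(depth)
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    shading = np.clip(normals @ l, 0.0, None)  # n · l, clamped to [0, inf)
    return np.abs(shading - target).mean()
```

Because the rendered shading is a differentiable function of the predicted depth, gradients of this loss flow back into the depth network, which is the point of a depth-based rendering loss.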
“…Early data-driven MVS techniques include SurfaceNet [21,22], which combines all pixel and camera information into colored voxel cubes used as input of the network, and LSM [18], which uses differentiable projection for end-to-end training. MVSNet [23,24] infers the depth maps for each image one at a time using multiple source images and uses differentiable homography warping to enable end-to-end training. These methods have leveraged DNNs to obtain high-quality reconstructions, and recent works, such as R-MVSNet [25], put the focus on scalability.…”
Section: Related Work
confidence: 99%
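The differentiable homography warping mentioned above maps reference-view pixels onto a source view for each fronto-parallel depth hypothesis, roughly H(d) = K_src (R − t nᵀ / d) K_ref⁻¹ with relative pose (R, t). A minimal single-pixel sketch (function names are my own; MVSNet applies this warp to whole feature maps with bilinear sampling):

```python
import numpy as np

def plane_sweep_homography(K_ref, K_src, R, t, depth):
    """Homography taking reference-view pixels to the source view for a
    fronto-parallel plane at distance `depth` (plane-sweep stereo).
    R, t: relative pose from the reference to the source camera."""
    n = np.array([0.0, 0.0, 1.0])  # plane normal in the reference frame
    H = K_src @ (R - np.outer(t, n) / depth) @ np.linalg.inv(K_ref)
    return H

def warp_pixel(H, u, v):
    """Warp pixel (u, v) with H in homogeneous coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]
```

The induced parallax shrinks as the depth hypothesis grows, which is what lets a cost volume built over many such warps discriminate between depth planes; since the warp is a smooth function of its inputs, the whole pipeline stays end-to-end trainable.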