2020
DOI: 10.1007/978-3-030-58452-8_11
Crowdsampling the Plenoptic Function

Cited by 60 publications (50 citation statements)
References 64 publications
“…The goal is to infer the scene structure and view-dependent appearance given a set of input views. Prior work reasons over an explicit [11,15,22,72] or discrete volumetric [20,33,36,40,44,55,64,65,67,68,80,82] representation of the underlying geometry. However, both have fundamental limitations: explicit representations often require fixing the structure's topology and suffer from poor local optima, while discrete volumetric approaches scale poorly to higher resolutions.…”
Section: Related Work
confidence: 99%
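A back-of-the-envelope sketch of the scaling limitation this statement alludes to: a dense voxel grid's memory grows cubically with resolution. The channel count and byte sizes below are illustrative assumptions, not figures from any cited paper.

```python
# Illustrative sketch: memory of a dense voxel grid grows cubically
# with resolution (channel count and dtype size are assumptions).
def voxel_grid_bytes(resolution: int, channels: int = 4, bytes_per_value: int = 4) -> int:
    """Memory for a dense grid storing `channels` float32 values per voxel
    (e.g. RGB + density)."""
    return resolution ** 3 * channels * bytes_per_value

for res in (128, 256, 512, 1024):
    print(f"{res}^3 grid: {voxel_grid_bytes(res) / 2**30:.2f} GiB")
# Doubling the resolution multiplies memory by 8, which is why discrete
# volumetric approaches scale poorly to higher resolutions.
```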
“…Recent works [31,32] on view synthesis often require retraining for each test scene. Although they produce visually impressive novel views, they are unable to generalize to unseen data.…”
Section: Comparing With State-of-the-art Methods
confidence: 99%
“…Recent neural rendering methods have introduced generative models that understand the underlying 3D scene structure and faithfully produce the target view at a distant query pose [31,32,33,34]. The Generative Query Network (GQN) [6] and its variants [7,8,9] incorporate all input observations (images and poses) into a single implicit 3D scene representation to generate the target view.…”
Section: Related Work
confidence: 99%
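A minimal sketch of the aggregation idea described above: encode each (image, pose) observation into a latent vector and pool the vectors into a single scene representation. The network shapes and the 7-D pose encoding are assumptions for illustration; GQN's actual architecture (a convolutional representation network with a recurrent generator) is more elaborate, though it also sums per-observation representations.

```python
# Sketch of GQN-style observation aggregation (assumed shapes and modules,
# not the original authors' code).
import torch
import torch.nn as nn

class SceneEncoder(nn.Module):
    def __init__(self, repr_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(          # toy image encoder
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # assumed 7-D pose: 3-D position + unit quaternion
        self.fuse = nn.Linear(64 + 7, repr_dim)

    def forward(self, images: torch.Tensor, poses: torch.Tensor) -> torch.Tensor:
        # images: (N, 3, H, W), poses: (N, 7) for N context observations
        feats = self.conv(images)                           # (N, 64)
        per_view = self.fuse(torch.cat([feats, poses], dim=-1))
        return per_view.sum(dim=0)                          # single scene code
```

A query-conditioned decoder (not shown) would take this scene code together with the target pose to render the view; summing keeps the representation order-invariant and lets the number of context views vary.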
“…Although image-based rendering methods support a wide range of renderable viewpoints, they tend to be sensitive to the quality of precomputed depth maps. Recently, instead of predicting proxy geometry individually, some methods [19,21,22,39,41] propose to optimize 3D representations jointly with differentiable renderers from input images. An emerging direction is using MLPs as implicit representations, where the MLP networks map continuous spatial points to target values, including signed distance [30], occupancy [26], and radiance [28].…”
Section: Related Work
confidence: 99%
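A minimal sketch of the implicit-representation idea in the statement above: an MLP maps a continuous 3-D coordinate to a field value such as a signed distance, an occupancy logit, or a density. Layer widths and depth are illustrative assumptions, not any specific paper's architecture.

```python
# Sketch of an implicit representation: an MLP as a continuous field
# over 3-D space (architecture details are illustrative assumptions).
import torch
import torch.nn as nn

class ImplicitField(nn.Module):
    def __init__(self, hidden: int = 256, out_dim: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),   # e.g. SDF value or occupancy logit
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (..., 3) continuous coordinates; returns (..., out_dim)
        return self.net(xyz)

field = ImplicitField()
points = torch.rand(1024, 3) * 2 - 1      # query points in [-1, 1]^3
values = field(points)                     # one field value per point
```

Because the network is queried at arbitrary continuous points, resolution is not fixed up front, which is the contrast with the discrete volumetric grids discussed earlier.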