2018
DOI: 10.1007/978-3-030-01219-9_10

Multi-view to Novel View: Synthesizing Novel Views With Self-learned Confidence

Cited by 104 publications (85 citation statements)
References 33 publications
Citation types: 1 supporting, 75 mentioning, 0 contrasting
“…Though trained with 4 input images, we demonstrate that our networks can infer high-quality target images using fewer input images at test time. Using the experimental protocol of Sun et al. (2018) [32], which uses up to 4 input images to infer a target image, we report quantitative results for our approach and others that can use multiple input images [32,33,45], as well as for an approach accepting single inputs [25].…”
Section: Novel View Synthesis (mentioning)
confidence: 99%
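The protocol above evaluates networks that fuse a variable number of input views; the paper under discussion does this with self-learned per-view confidence. The sketch below shows one plausible form of such confidence-weighted fusion in PyTorch; the tensor names and shapes are illustrative assumptions, not the authors' actual implementation.

```python
import torch

def fuse_views(preds, confs, eps=1e-8):
    """Fuse per-view target-image predictions with per-pixel confidence.

    preds: (N, 3, H, W) candidate target images, one per input view.
    confs: (N, 1, H, W) non-negative self-learned confidence maps.
    Works for any N, so fewer views can be used at test time.
    """
    w = confs / (confs.sum(dim=0, keepdim=True) + eps)  # normalize over views
    return (w * preds).sum(dim=0)                       # (3, H, W)

# Example: a model of this kind trained with 4 views, queried with only 2.
preds = torch.rand(2, 3, 64, 64)
confs = torch.rand(2, 1, 64, 64)
target = fuse_views(preds, confs)
print(target.shape)  # torch.Size([3, 64, 64])
```

Because the weights are normalized over the view axis, the same function accepts any number of views, which is consistent with testing on fewer inputs than were used during training.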
“…This demonstrates that our method is able to generalize to intermediate poses not seen during training. In contrast, for their ShapeNet evaluations, [32] uses one-hot vectors indicating the discrete azimuth and elevation intervals at which the source images were rendered, and the specified pose for the target image. It is thus unclear how or whether their method would be able to generalize to intermediate poses not used for training.…”
Section: A411 ShapeNet Chairs and Cars (mentioning)
confidence: 99%
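For concreteness, a one-hot azimuth/elevation pose encoding of the kind described above can be sketched as follows; the bin counts and elevation values are illustrative assumptions about a ShapeNet rendering setup, not values taken from [32].

```python
import numpy as np

def one_hot_pose(azimuth_deg, elevation_deg,
                 n_az_bins=18, elev_values=(0, 10, 20)):
    """Concatenated one-hot azimuth/elevation pose code.

    Assumes azimuth is discretized into n_az_bins equal intervals over
    360 degrees and elevation takes one of a few rendered values; both
    choices are illustrative, not taken from the cited paper.
    """
    az = np.zeros(n_az_bins, dtype=np.float32)
    az[int(azimuth_deg % 360) // (360 // n_az_bins)] = 1.0
    el = np.zeros(len(elev_values), dtype=np.float32)
    el[int(np.argmin([abs(elevation_deg - v) for v in elev_values]))] = 1.0
    return np.concatenate([az, el])

pose = one_hot_pose(azimuth_deg=40, elevation_deg=10)
print(pose.shape, pose.nonzero())  # (21,) -> bins 2 (azimuth), 19 (elevation)
```

Because such a vector can only name one of the rendered bins, it has no way to express an azimuth between bin centers, which is exactly why generalization to intermediate poses is in question.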
“…In a similar setting, Tatarchenko et al. [58] predicted both object appearance and a depth map from different viewpoints. Successive works [76], [42] trained a network to learn a symmetry-aware appearance flow, re-casting the remaining synthesis as a task of image completion; [56] extends this approach to the case in which N > 1 input viewpoints are available. However, all these works [69], [58], [76], [42], [56] assume the target view to be known at training time. As this is not usually the case in the real world, these approaches limit themselves to training solely on synthetic data and exhibit limited generalization in real-world scenarios.…”
Section: Related Work (mentioning)
confidence: 99%
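As a concrete illustration of the appearance-flow idea, the sketch below warps a source view by a predicted per-pixel flow field with bilinear sampling; the flow network itself is omitted, and using torch.nn.functional.grid_sample is our choice of sampling primitive, not necessarily what [76], [42], or [56] do.

```python
import torch
import torch.nn.functional as F

def warp_with_appearance_flow(src, flow):
    """Resample a source view according to a predicted appearance flow.

    src:  (B, 3, H, W) source image.
    flow: (B, 2, H, W) per-pixel sampling offsets in pixels (dx, dy),
          as a flow-prediction network might output.
    Pixels mapped outside the source come back as zeros.
    """
    B, _, H, W = src.shape
    # Base sampling grid, then shift it by the predicted flow.
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).float()              # (H, W, 2)
    grid = grid.unsqueeze(0).expand(B, -1, -1, -1).clone()
    grid += flow.permute(0, 2, 3, 1)                          # add (dx, dy)
    # grid_sample expects normalized coordinates in [-1, 1].
    grid[..., 0] = 2.0 * grid[..., 0] / (W - 1) - 1.0
    grid[..., 1] = 2.0 * grid[..., 1] / (H - 1) - 1.0
    return F.grid_sample(src, grid, mode="bilinear",
                         padding_mode="zeros", align_corners=True)

src = torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)   # zero flow reproduces the source view
out = warp_with_appearance_flow(src, flow)
print(torch.allclose(out, src))    # True
```

Pixels that the flow maps outside the source return as zeros; these holes are the residue that the cited works hand off to an image-completion stage.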