2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00749
SynSin: End-to-End View Synthesis From a Single Image

Abstract: Figure 1: End-to-end view synthesis. Given a single RGB image (red), SynSin generates images of the scene at new viewpoints (blue). SynSin predicts a 3D point cloud, which is projected onto new views using our differentiable renderer; the rendered point cloud is passed to a GAN to synthesise the output image. SynSin is trained end-to-end, without 3D supervision.
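The caption above outlines the full pipeline: a single image is lifted to a feature point cloud, re-projected into the target camera by a differentiable renderer, and decoded by an adversarially trained refinement network. The following is a minimal sketch of that data flow, assuming PyTorch; the layer sizes, the depth range, and the module and function names (FeatureAndDepthNet, RefinementGenerator, synthesize_view) are illustrative placeholders, not the authors' released architecture.

```python
import torch
import torch.nn as nn

class FeatureAndDepthNet(nn.Module):
    """Predicts a per-pixel feature map and a depth map from a single RGB image
    (placeholder architecture; the paper uses much deeper networks)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim + 1, 3, padding=1),
        )

    def forward(self, img):
        out = self.net(img)
        feats = out[:, :-1]
        depth = torch.sigmoid(out[:, -1:]) * 10.0  # assumed depth range (0, 10)
        return feats, depth

class RefinementGenerator(nn.Module):
    """Decodes the re-projected feature map into an RGB image; in the paper this
    generator is trained adversarially together with a discriminator."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, feats):
        return self.net(feats)

def synthesize_view(img, K, T_src_to_tgt, encoder, renderer, generator):
    """End-to-end forward pass: image -> features + depth -> point cloud
    projected into the target view -> refined RGB output. `renderer` is any
    differentiable point-cloud projection (see the projection sketch further
    below)."""
    feats, depth = encoder(img)
    reprojected = renderer(feats, depth, K, T_src_to_tgt)
    return generator(reprojected)
```

Because the loss compares the synthesized image against a real photograph of the target viewpoint (photometric plus adversarial terms), the whole chain can be trained end-to-end without any 3D ground truth, as the caption states.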
Cited by 310 publications (286 citation statements)
References 59 publications
“…A variety of supervisory signals have been proposed to learn such priors. Besides using 3D ground truth directly, authors have considered using videos [36], [45], [46], [47], [48], stereo pairs [38], [49] and multi-view images [50], [51], [52], [53], [54].…”
Section: Category-specific Reconstruction
confidence: 99%
“…This research topic can be applied to various problems such as smoothing the transition between pairs of images, filling up missing regions, etc. Most methods make use of a 3D representation of the scene to synthesize novel views [7,8]. Inspired by [7,22], we adopt their 3D point cloud renderer in our architecture.…”
Section: Related Work
confidence: 99%
“…Most methods make use of a 3D representation of the scene to synthesize novel views [7,8]. Inspired by [7,22], we adopt their 3D point cloud renderer in our architecture. Our implementation is different in that we assume that camera poses are unknown and use a network for future camera pose estimation.…”
Section: Related Work
confidence: 99%
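The citation statements above refer to SynSin's 3D point cloud renderer. As a companion to the pipeline sketch earlier, here is a hedged sketch of the geometric core of such a renderer in the same assumed PyTorch setting: unproject each source pixel with its predicted depth, transform it by the relative camera pose, and reproject it into the target view. The function name, the pinhole-camera convention, and the hard nearest-pixel splat are assumptions; the paper's renderer instead splats each point softly over a neighbourhood so that gradients also reach the predicted depth.

```python
import torch

def project_point_cloud(feats, depth, K, T):
    """Naive point-cloud projection: lift every source pixel to 3D with its
    predicted depth, move it into the target camera with the 4x4 relative pose T,
    reproject with the 3x3 intrinsics K, and write its feature vector at the
    nearest target pixel. Occlusion handling is omitted, and the hard rounding
    blocks gradients to depth -- which is exactly why SynSin uses a soft,
    differentiable splatting renderer instead."""
    B, C, H, W = feats.shape
    device = feats.device
    ys, xs = torch.meshgrid(torch.arange(H, device=device),
                            torch.arange(W, device=device), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).float().reshape(3, -1)  # 3 x HW
    rays = torch.linalg.inv(K) @ pix                       # viewing rays in the source camera
    out = torch.zeros_like(feats)
    for b in range(B):
        pts = rays * depth[b].reshape(1, -1)               # 3D points in the source camera
        pts = torch.cat([pts, torch.ones(1, pts.shape[1], device=device)])
        pts = (T @ pts)[:3]                                # points in the target camera
        z = pts[2].clamp(min=1e-6)
        u = (K[0, 0] * pts[0] / z + K[0, 2]).round().long()
        v = (K[1, 1] * pts[1] / z + K[1, 2]).round().long()
        keep = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (pts[2] > 0)
        out[b, :, v[keep], u[keep]] = feats[b].reshape(C, -1)[:, keep]
    return out
```

This stub can be passed as the `renderer` argument of the earlier `synthesize_view` sketch; swapping it for a soft splatting renderer is what makes the full pipeline trainable end-to-end from images alone.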