2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.750

Deep View Morphing

Abstract: Recently, convolutional neural networks (CNN) have been successfully applied to view synthesis problems. However, such CNN-based methods can suffer from lack of texture details, shape distortions, or high computational complexity. In this paper, we propose a novel CNN architecture for view synthesis called "Deep View Morphing" that does not suffer from these issues. To synthesize a middle view of two input images, a rectification network first rectifies the two input images. An encoder-decoder network then gen…
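The abstract describes a two-stage pipeline: a rectification network that first aligns the input pair, followed by an encoder-decoder that synthesizes the middle view. Below is a minimal PyTorch sketch of that structure; the module names, layer sizes, and the affine parameterization of the rectifying warps are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of the two-stage pipeline the abstract describes.
# Assumptions (not from the paper): PyTorch, affine rectifying warps,
# and small conv stacks standing in for the real networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RectificationNet(nn.Module):
    """Predicts one affine warp per input image to roughly align the pair."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, 12)  # 2 affine warps x 6 parameters each

    def forward(self, img_a, img_b):
        x = torch.cat([img_a, img_b], dim=1)
        theta = self.fc(self.features(x).flatten(1))
        theta_a = theta[:, :6].view(-1, 2, 3)
        theta_b = theta[:, 6:].view(-1, 2, 3)
        grid_a = F.affine_grid(theta_a, img_a.shape, align_corners=False)
        grid_b = F.affine_grid(theta_b, img_b.shape, align_corners=False)
        rect_a = F.grid_sample(img_a, grid_a, align_corners=False)
        rect_b = F.grid_sample(img_b, grid_b, align_corners=False)
        return rect_a, rect_b

class EncoderDecoderNet(nn.Module):
    """Encodes the rectified pair jointly and decodes one middle view."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, rect_a, rect_b):
        return self.decoder(self.encoder(torch.cat([rect_a, rect_b], dim=1)))

class DeepViewMorphingSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.rectify = RectificationNet()
        self.synthesize = EncoderDecoderNet()

    def forward(self, img_a, img_b):
        rect_a, rect_b = self.rectify(img_a, img_b)
        return self.synthesize(rect_a, rect_b)

# Usage: two RGB views in, one synthesized middle view out.
model = DeepViewMorphingSketch()
middle = model(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128))
print(middle.shape)  # torch.Size([1, 3, 128, 128])
```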

Cited by 65 publications (58 citation statements) | References 36 publications
“…Novel view synthesis is typically solved using image-based rendering techniques [Kang et al. 2006], with recent approaches allowing for high-quality view synthesis results [Chaurasia et al. 2011, 2013; Hedman et al. 2017; Hedman and Kopf 2018; Penner and Zhang 2017]. With the emergence of deep neural networks, learning-based techniques have become an increasingly popular tool for novel view synthesis [Flynn et al. 2016; Ji et al. 2017; Kalantari et al. 2016; Meshry et al. 2019; Mildenhall et al. 2019; Sitzmann et al. 2019; Thies et al. 2018; Zhou et al. 2018]. To enable high-quality synthesis results, existing methods typically require multiple input views [Kang et al. 2006; Penner and Zhang 2017].…”
Section: Related Work, 2.1 Novel View Synthesis
confidence: 99%
“…With a large number of multi-view images, 3D stereo algorithms [5] can reconstruct the 3D scene, which can then be used to synthesize novel views. Ji et al. [11] proposed to synthesize middle-view images from two rectified input views. Yan et al. [33] proposed a perspective transformer network to learn the projection transformation after reconstructing the 3D volume of the object.…”
Section: Related Work
confidence: 99%
“…View synthesis is a long-standing problem in computer vision [5, 11, 25, 33, 35] that facilitates many applications, including surrounding perception and virtual reality. In modern autonomous driving solutions, the limited viewpoint of on-car cameras restricts the system from reliably understanding the environment and from acquiring an accurate global view for better policy making and path planning.…”
Section: Introduction
confidence: 99%
“…More recent work [Ji et al. 2017] employs some notion of 3D geometry in the end-to-end process to deal with the 2D-to-3D object mapping. For instance, some methods use an explicit flow that maps pixels from the input image to the output novel view.…”
Section: Related Work
confidence: 99%
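The "explicit flow" idea mentioned in the excerpt above can be made concrete with a short sketch: a dense per-pixel offset field tells each output location where to sample in the input image. The helper below is an illustrative assumption (PyTorch, with a hand-made constant flow standing in for a network prediction), not code from any of the cited papers.

```python
# Sketch of flow-based view synthesis: a dense 2D flow field tells each
# output pixel where to sample from the input image. Names and the
# constant test flow are illustrative assumptions.
import torch
import torch.nn.functional as F

def warp_with_flow(image, flow):
    """Sample `image` at locations displaced by `flow`.

    image : (N, C, H, W) tensor.
    flow  : (N, H, W, 2) tensor of per-pixel (x, y) offsets in pixels.
    """
    n, _, h, w = image.shape
    # Base sampling grid in normalized [-1, 1] coordinates, the layout
    # that grid_sample expects (grid[..., 0] is x, grid[..., 1] is y).
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Convert pixel offsets to normalized offsets and add to the base grid.
    norm = torch.stack(
        [flow[..., 0] * 2 / (w - 1), flow[..., 1] * 2 / (h - 1)], dim=-1)
    return F.grid_sample(image, base + norm, align_corners=True)

# Usage: shift every pixel by sampling 3 columns to the right.
img = torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 64, 64, 2)
flow[..., 0] = 3.0
shifted = warp_with_flow(img, flow)
print(shifted.shape)  # torch.Size([1, 3, 64, 64])
```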
“…For instance, some methods use an explicit flow that maps pixels from the input image to the output novel view. In Deep View Morphing [Ji et al. 2017], two input images and an explicit rectification stage that roughly aligns the inputs are used to generate intermediate views. Other methods split the problem between visible pixels, i.e.…”
Section: Related Work
confidence: 99%
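For context, the rectify-then-interpolate recipe that Deep View Morphing learns end-to-end mirrors classical view morphing, where corresponding pixels in a rectified pair lie on the same scanline and an intermediate view is obtained by linearly interpolating their positions and colors. A minimal NumPy sketch, assuming a known per-pixel disparity map (the deep method effectively learns such correspondences) and naive forward splatting; this is not the paper's algorithm.

```python
# Classical view-morphing interpolation between two rectified images.
# Assumptions (illustrative): a known horizontal disparity map from view A
# to view B, and nearest-pixel splatting where collisions simply overwrite.
import numpy as np

def morph_middle_view(img_a, img_b, disp, alpha=0.5):
    """Blend two rectified views at interpolation parameter alpha in [0, 1].

    img_a, img_b : (H, W, 3) float arrays, rectified so that corresponding
                   pixels lie on the same scanline.
    disp         : (H, W) disparity; pixel (y, x) in A matches (y, x - disp) in B.
    """
    h, w, _ = img_a.shape
    xs = np.tile(np.arange(w), (h, 1))
    ys = np.repeat(np.arange(h)[:, None], w, axis=1)
    # Intermediate x-coordinate of each correspondence, and its pixel in B.
    xi = np.clip(np.round(xs - alpha * disp).astype(int), 0, w - 1)
    xb = np.clip(np.round(xs - disp).astype(int), 0, w - 1)
    out = np.zeros_like(img_a)
    out[ys, xi] = (1 - alpha) * img_a + alpha * img_b[ys, xb]
    return out

# Usage with random images and a constant one-pixel disparity.
mid = morph_middle_view(np.random.rand(4, 8, 3), np.random.rand(4, 8, 3),
                        np.full((4, 8), 1.0))
print(mid.shape)  # (4, 8, 3)
```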