2021
DOI: 10.48550/arxiv.2103.05606
Preprint

NeX: Real-time View Synthesis with Neural Basis Expansion

Abstract (Figure 1 caption): (a) Each pixel in the NeX multiplane image consists of an alpha transparency value, a base color k_0, and view-dependent reflectance coefficients k_1 ... k_n. A linear combination of these coefficients with basis functions learned by a neural network produces the final color value. (b, c) Our synthesized images can be rendered in real time with view-dependent effects such as the reflection on the silver spoon.
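The per-pixel color model in the caption can be sketched directly: the final color is the base color k_0 plus a linear combination of the view-dependent coefficients k_1..k_n with basis values produced by the learned network. A minimal numpy sketch, assuming the basis values for the current view direction are given (in NeX they come from a small MLP; the function name here is illustrative):

```python
import numpy as np

def nex_pixel_color(k, basis):
    """Combine per-pixel reflectance coefficients with learned basis values.

    C(v) = k_0 + sum_i k_i * H_i(v), as in the Figure 1 caption.

    k:     (n+1, 3) array -- k[0] is the base color, k[1:] are the
                             view-dependent reflectance coefficients
    basis: (n,) array     -- basis function values H_1(v)..H_n(v) for the
                             current view direction v
    """
    k = np.asarray(k, dtype=float)
    basis = np.asarray(basis, dtype=float)
    # Linear combination of coefficients with basis values, plus base color.
    return k[0] + basis @ k[1:]

# Toy example: one pixel with a base color and two view-dependent terms.
k = [[0.5, 0.4, 0.3],   # k0: base color (RGB)
     [0.1, 0.0, 0.0],   # k1
     [0.0, 0.2, 0.0]]   # k2
basis = [0.5, -1.0]     # H1(v), H2(v) for some view direction v
print(nex_pixel_color(k, basis))  # -> [0.55 0.2  0.3 ]
```

Because only the basis values depend on the view direction, the coefficients can be baked per pixel and the combination evaluated cheaply at render time, which is what enables real-time playback.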

Cited by 2 publications (5 citation statements) | References 39 publications
“…Learning 3D scene representation with a parameterized neural network has been largely explored by recent works from various angles such as implicit signed distance functions [21,5,34], occupancy [31,35,14], volume rendering (i.e., radiance fields) [32,26,33,37,50,44], and shapes [2,15,14]. Such implicit neural representations have also started to influence traditional 2D tasks such as image representation [23,40], super-resolution [9], and medical image analysis [49].…”
Section: Rendering With Spatial Encodingmentioning
confidence: 99%
“…They showed clear improvements on rendered images, which were free from blurry details and structural distortion. This technique has been used as a default setting in later 3D works such as [26,50,39,44], whose major focus was to reduce rendering time and improve output quality. The theory of why spatial encoding so dramatically boosts rendering quality has been partially analyzed in [36,8,40].…”
Section: Rendering With Spatial Encodingmentioning
confidence: 99%
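The spatial encoding discussed in the citation above maps low-dimensional coordinates to a higher-dimensional space of sinusoids before feeding them to the network. A minimal sketch of the NeRF-style formulation (the function name and frequency count are illustrative, not from the paper):

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    """NeRF-style spatial encoding.

    Maps each coordinate p to
    (sin(2^0 pi p), ..., sin(2^{L-1} pi p), cos(2^0 pi p), ..., cos(2^{L-1} pi p)).

    x: (..., d) array of coordinates; returns (..., 2 * num_freqs * d) features.
    """
    x = np.asarray(x, dtype=float)
    freqs = 2.0 ** np.arange(num_freqs) * np.pi   # (L,) frequencies 2^k * pi
    angles = x[..., None, :] * freqs[:, None]     # (..., L, d)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)

pt = np.array([0.5, -0.25, 1.0])          # a 3D point
feat = positional_encoding(pt, num_freqs=4)
print(feat.shape)                          # (24,) = 2 * 4 frequencies * 3 dims
```

The high-frequency sinusoids let a small MLP fit fine detail that it cannot represent from raw coordinates alone, which is the effect the quoted papers [36,8,40] analyze.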
“…3.3 for details). In the second category, we find recent approaches that enable efficient rendering [25,26,27] but whose continuous representation is implicit and must be fitted via gradient descent for every new object or scene (typically taking days on commodity hardware). In this paper we present a simple yet powerful approach to novel view synthesis that explicitly encodes source views into a volumetric representation enabling amortized rendering.…”
Section: Related Workmentioning
confidence: 99%