2019
DOI: 10.1111/cgf.13633
Neural BTF Compression and Interpolation

Abstract: The Bidirectional Texture Function (BTF) is a data‐driven solution to render materials with complex appearance. A typical capture contains tens of thousands of images of a material sample under varying viewing and lighting conditions. While capable of faithfully recording complex light interactions in the material, the main drawback is the massive memory requirement, both for storing and rendering, making effective compression of BTF data a critical component in practical applications. Common compression schem…

Cited by 57 publications (69 citation statements)
References 38 publications (31 reference statements)
“…Decoder Network The decoder (Figure 3) is also a fully connected network with non‐linear activations, following the same decoder design as Rainer et al [RJGW19]. It takes as input the latent coordinates of the ABRDF, along with the light and view directions in stereographic coordinates, which makes it practical for rendering.…”
Section: Methods
confidence: 99%
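The statement above notes that the decoder consumes light and view directions in stereographic coordinates, which keeps the directional input two‐dimensional. A minimal sketch of that parameterization, assuming the common convention of projecting a unit direction from the pole (0, 0, −1) onto the plane (the exact convention used by the cited papers may differ):

```python
import numpy as np

def stereographic(d):
    """Map a unit direction (x, y, z) with z > -1 to 2D stereographic
    coordinates, projecting from the pole (0, 0, -1):
    u = x / (1 + z), v = y / (1 + z).
    The upper hemisphere (z >= 0) lands inside the unit disk."""
    x, y, z = d
    return np.array([x / (1.0 + z), y / (1.0 + z)])

# The zenith direction maps to the disk center.
uv = stereographic(np.array([0.0, 0.0, 1.0]))
```

For rendering, this gives the network a compact, singularity‐free encoding of hemispherical light and view directions.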
“…The encoder only consists of a PReLU activation, one hidden layer with 128 neurons and a ReLU activation. The decoder consists of 4 hidden linear layers of 106 neurons with ReLU activations (same architecture as [RJGW19]). Whilst those parameters remain fixed, we explore several possibilities of latent space dimensionality in the following section.…”
Section: Methods
confidence: 99%
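The decoder described in the quote is a plain fully connected network. A minimal NumPy sketch of its forward pass with untrained random weights, not the authors' implementation: the four hidden layers of 106 neurons follow the quoted description, while the 8‐D latent code is an assumed placeholder (the citing paper explores several latent dimensionalities):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def mlp(x, layer_sizes):
    """Forward pass through a fully connected network with random
    (untrained) weights; ReLU on every layer except the last."""
    for n_out in layer_sizes[:-1]:
        W = rng.standard_normal((x.shape[-1], n_out)) * 0.1
        x = relu(x @ W)
    W = rng.standard_normal((x.shape[-1], layer_sizes[-1])) * 0.1
    return x @ W

# Decoder input: a per-texel latent code (8-D assumed here) plus light
# and view directions, each as 2-D stereographic coordinates -> 12-D.
latent_dim = 8
x = rng.standard_normal((1, latent_dim + 4))
# Four hidden layers of 106 neurons, RGB reflectance output.
rgb = mlp(x, [106, 106, 106, 3])
```

The small fixed decoder is what makes per‐shading‐point evaluation cheap enough to use inside a renderer.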
“…There are also approaches that use neural networks to enhance specific building blocks of the classical rendering pipeline, e.g., shaders. Rainer et al [RJGW19] learn Bidirectional Texture Functions and Maximov et al [MLTFR19] learn Appearance Maps.…”
Section: Theoretical Fundamentals
confidence: 99%
“…Masselus et al [2004] compare the errors of fitting the sampled reflectance function to various basis functions and conclude that multilevel B-Splines can preserve the most features. More recently, Rainer et al [2019] utilize neural networks to compress and interpolate sparsely sampled observations. However, these algorithms interpolate the reflectance function independently on each pixel and do not consider local information in neighboring pixels.…”
Section: Related Work
confidence: 99%