MaterialGAN
2020
DOI: 10.1145/3414685.3417779

Abstract: We address the problem of reconstructing spatially-varying BRDFs from a small set of image measurements. This is a fundamentally under-constrained problem, and previous work has relied on using various regularization priors or on capturing many images to produce plausible results. In this work, we present MaterialGAN, a deep generative convolutional network based on StyleGAN2, trained to synthesize realistic SVBRDF parameter maps. We show that MaterialGAN can be used as a powerful mate…
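As a rough illustration of how a generative network like MaterialGAN can serve as a material prior, the sketch below optimizes a latent code so that re-rendered SVBRDF maps match a few captured photos. This is a minimal sketch under stated assumptions: `material_gan`, `mean_latent`, and `render` are hypothetical placeholders for a pretrained StyleGAN2-style SVBRDF generator and a differentiable renderer, not the authors' released code or API.

```python
# Minimal sketch: latent-space optimization against a generative material prior.
# `material_gan` (pretrained SVBRDF generator) and `render` (differentiable
# renderer reproducing the capture setup) are hypothetical placeholders.
import torch

def fit_material(photos, lights, views, material_gan, render,
                 steps=500, lr=0.02):
    """Optimize a latent code so rendered SVBRDF maps match captured photos."""
    # Start from an average latent code (assumed helper `mean_latent`).
    w = material_gan.mean_latent().clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        maps = material_gan(w)                 # SVBRDF parameter maps
        renders = render(maps, lights, views)  # re-render the capture setup
        loss = torch.nn.functional.l1_loss(renders, photos)
        loss.backward()
        opt.step()

    return material_gan(w).detach()            # recovered SVBRDF maps
```

Optimizing in the generator's latent space keeps the recovered maps on the manifold of realistic materials, which is what makes the few-image reconstruction problem tractable.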

Cited by 68 publications (59 citation statements)
References 32 publications

“…Deep material acquisition A popular approach to speed up acquisition of (sv)BRDFs is to learn a mapping from images to BRDFs, either supervised [RRF*16, GRR*17, DDB20, DAD*18, LCY*17], or with some level of self‐supervision and differentiable rendering in the mix [PHS20, GLD*19, GSH*20, HDMR21]. These methods produce parameters to classic BRDF models (and inherit their limitations), while we produce a NN that represents the BRDF itself.…”
Section: Previous Work
confidence: 99%
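To make the self-supervised variant mentioned in the quote concrete, below is a minimal sketch of one training step for a feed-forward image-to-SVBRDF network trained with a differentiable rendering loss. The names `net` and `render`, and the choice of an L1 re-rendering loss, are illustrative assumptions rather than the specific pipeline of any cited paper.

```python
# Sketch: self-supervised training step for an image-to-SVBRDF network.
# `net` predicts SVBRDF parameter maps from photos; `render` is a
# differentiable renderer. The loss compares re-renderings to the input
# photos instead of requiring ground-truth BRDF maps.
import torch

def train_step(net, render, opt, photos, lights, views):
    opt.zero_grad()
    maps = net(photos)                        # predicted SVBRDF parameter maps
    rerendered = render(maps, lights, views)  # re-render under the capture setup
    loss = torch.nn.functional.l1_loss(rerendered, photos)
    loss.backward()
    opt.step()
    return loss.item()
```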
“…Such latent‐space optimization is not possible with methods based on the UNet convolutional neural network [TWPH20], as the skip connections transmit shape information while bypassing the encoder‐decoder bottleneck. Latent‐space optimization has proven to be a very effective regularization strategy for other ill‐posed inverse problems, such as material recovery from few photographs [GLD*19, GSH*20] and shape completion from partial point clouds [PFS*19].…”
Section: Related Work
confidence: 99%
“…There are a multitude of deep generative modeling paradigms [BTLLW22, CHIS22], all with different pros and cons: generative adversarial networks (GANs), variational autoencoders (VAEs), normalizing flows, auto‐regressive models, and diffusion models. These deep generative models have been applied across many visual domains such as natural images [KLA21], materials [GSH*20], sketches [VPB*22], scenes [WSCR18], voxels [WZX*16], meshes [NGEB20], implicit shapes [CZ19], and character motion [LZCvdP20]. As in the reconstruction context, many of these approaches have been designed to produce visual outputs directly, without any intermediate symbolic representation, and thus are outside the scope of what our survey covers.…”
Section: Background and Scope
confidence: 99%