2022
DOI: 10.1111/cgf.14591

Controlling Material Appearance by Examples

Abstract: Figure 1: We propose a method to control and improve the appearance of an existing material (left) by transferring the appearance of materials in one or multiple target photo(s) (center) to the existing material. The augmented material (right) combines the coarse structure of the original material with the fine-scale appearance of the target(s) and preserves the input tileability. Our method can also transfer appearance between materials of different types via spatial control. This enables a simple workflow to …

Cited by 13 publications (6 citation statements)
References 53 publications
“…Material acquisition has been an active research area for decades [GGG*16], and recent work has focused on lightweight acquisition, typically using deep learning for reconstruction. Different approaches were proposed to capture materials from a single photograph [DAD*18, GLT*21, LSC18, ZK21, MRR*22, VPS21], multiple photographs [DAD*19, GLD*19, GSH*20, HHG*22], or a video [YDPG21]. These methods focus on the recovery of per-pixel parameter maps and do not allow for much post-acquisition control.…”
Section: Related Work
confidence: 99%
“…There have also been multiple concurrent works that generate materials conditioned on text prompts. [Hu et al. (2023)] utilise an extensive material dataset to generate high-quality material node graphs using an autoregressive model conditioned on a CLIP text or image embedding. [Deschaintre et al. (2023)] use a large dataset of fabric materials and descriptions to fine-tune CLIP for material retrieval.…”
Section: Related Work
confidence: 99%
“…Henzler et al. [2021] employ a convolutional neural network, conditioned on a latent code from a learned space, to convert a random noise field into a random non-repeating field of BRDFs that match the appearance of a flash-lit photograph of a stationary material. Inspired by MaterialGAN [Guo et al. 2020b], Hu et al. [2022a] introduce tileable material GANs that allow for spatial control through an additional guidance image. While these networks can produce some stochastic variations around the expected value, they do not effectively sample the distribution conditioned on the input image.…”
Section: Related Work
confidence: 99%