2022
DOI: 10.48550/arxiv.2208.07059
Preprint

UPST-NeRF: Universal Photorealistic Style Transfer of Neural Radiance Fields for 3D Scene

Cited by 6 publications (11 citation statements)
References 38 publications
“…It is worth noting that this is indeed a common problem in all 2D PST methods when simply applied to a 3D scene. Moreover, the stylized results of UPST [7] show some visible disparity from the reference color style. In contrast, our approach manages to integrate the advantages of the radiance field with MKL.…”
Section: Qualitative Results (mentioning)
confidence: 98%
“…WCT² [59] performs better than CCPL in structure preservation due to its wavelet module, but the inevitable noises and the intense edges (e.g., the bone boundary of trex) also cause disharmony. By training a high-resolution 2D PST network, UPST [7] is able to preserve the structure well. However, the color style of the stylized scene differs from the reference.…”
Section: Qualitative Results (mentioning)
confidence: 99%
“…Semantic-driven editing approaches, such as stroke-based scene editing [36,41,70], text-driven image synthesis and editing [1,53,56], and attribute-based face editing [28,64], have greatly improved the ease of artistic creation. However, despite the great success of 2D image editing and neural rendering techniques [14,44], similar editing abilities in the 3D area are still limited: (1) they require laborious annotation such as image masks [28,75] and mesh vertices [73,78] to achieve the desired manipulation; (2) they conduct global style transfer [12,13,16,21,79] while ignoring the semantic meaning of each object part (e.g., windows and tires of a vehicle should be textured differently); (3) they can edit on categories by learning a textured 3D latent representation (e.g., 3D-aware GANs with faces and cars etc.) [6,8,9,18,48,60,63,64], or at a coarse level [37,68] with basic color assignment or object-level disentanglement [32], but struggle to conduct texture editing on objects with photo-realistic textures or out-of-distribution characteristics.…”
Section: Introduction (mentioning)
confidence: 99%