2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv51458.2022.00029
Stylizing 3D Scene via Implicit Representation and HyperNetwork

Cited by 64 publications (52 citation statements)
References 33 publications
“…6, we compare our results with [5], a pioneering work introducing NeRF into stylization. [5] calculates the style and content losses on small sub-sampled patches which approximate large patches. However, such approximation degrades the preservation of content details, as shown in our comparisons.…”
Section: Qualitative Results (mentioning)
Confidence: 97%
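The patch-based loss computation attributed to [5] in this excerpt can be made concrete with a short sketch. The code below is a minimal illustration, not the authors' implementation: it assumes a frozen pretrained VGG-16 from torchvision as the feature extractor, and the helper names (`sample_coords`, `crop`, `patch_losses`) are hypothetical. The key idea is that style and content losses are evaluated on small randomly cropped patches rather than full renders, which is exactly the approximation the excerpt says can degrade content detail.

```python
# Minimal sketch of patch-based style/content losses, assuming a
# frozen pretrained VGG-16 feature extractor. Helper names are
# illustrative, not taken from [5]'s released code.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# VGG-16 features up to relu3_3, frozen (ImageNet normalization
# of inputs omitted for brevity).
features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in features.parameters():
    p.requires_grad_(False)

def gram(feat):
    # Normalized Gram matrix of a (B, C, H, W) feature map.
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def sample_coords(h, w, n_patches=4, size=64):
    # Top-left corners of random square crops.
    ys = torch.randint(0, h - size + 1, (n_patches,)).tolist()
    xs = torch.randint(0, w - size + 1, (n_patches,)).tolist()
    return list(zip(ys, xs))

def crop(img, coords, size=64):
    # Stack the crops along the batch dimension.
    return torch.cat([img[:, :, y:y + size, x:x + size] for y, x in coords], 0)

def patch_losses(rendered, content_img, style_img, size=64):
    # Shared crop locations so rendered/content features are compared
    # patch-for-patch; style statistics come from the style image.
    h, w = rendered.shape[2:]
    coords = sample_coords(h, w, size=size)
    r = features(crop(rendered, coords, size))
    c = features(crop(content_img, coords, size))
    s = features(crop(style_img, sample_coords(*style_img.shape[2:], size=size), size))
    content_loss = F.mse_loss(r, c)
    style_loss = F.mse_loss(gram(r), gram(s))
    return content_loss, style_loss
```

Sharing crop coordinates between the rendered and content images keeps the content loss a patch-for-patch comparison, and memory scales with patch size rather than image size, which is the motivation for sub-sampling in the first place.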
“…As is shown in framed yellow boxes, the fine-level shapes like the…”
[Caption of Figure 6: Qualitative comparisons to Chiang et al. [5]. We compare the stylized results on the Tanks and Temples dataset [24].]
Section: Qualitative Results (mentioning)
Confidence: 99%
“…Cao et al [4] stylize indoor scenes using a point cloud representation that cannot be directly used to texture a mesh. Other methods combine novel view synthesis and NST for consistent stylization from only a few input images [8,25,35]. In contrast, we do not require a network during inference to produce stylization results; our results can be rendered directly by a standard graphics pipeline.…”
Section: 3D Style Transfer (mentioning)
Confidence: 99%
“…This is difficult since style transfer losses are typically defined on 2D image features [21], so NST does not immediately generalize to 3D meshes. Recently, style transfer has been combined with novel view synthesis to stylize arbitrary scenes with a neural renderer from a sparse set of input images [8,25,35]. These model-based methods require a forward pass during inference and cannot directly be applied to meshes.…”
Section: Introduction (mentioning)
Confidence: 99%
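The paper these statements cite stylizes an implicit scene representation through a hypernetwork (per its title), and the excerpts note that such model-based methods need a network forward pass at inference. A generic sketch of that hypernetwork pattern follows; the class name `HyperStyleMLP`, the layer sizes, and the wiring are illustrative assumptions rather than the paper's actual architecture. A style embedding predicts the weights of a tiny MLP that maps per-point scene features to RGB, so swapping the embedding restyles the scene without retraining the geometry.

```python
import torch
import torch.nn as nn

class HyperStyleMLP(nn.Module):
    # Generic hypernetwork pattern: a style embedding predicts the
    # weights of a tiny MLP mapping per-point scene features to RGB.
    # All sizes and the wiring are illustrative assumptions.
    def __init__(self, style_dim=256, feat_dim=128, hidden=64):
        super().__init__()
        self.feat_dim, self.hidden = feat_dim, hidden
        n_params = (feat_dim * hidden + hidden) + (hidden * 3 + 3)
        self.hyper = nn.Sequential(
            nn.Linear(style_dim, 256), nn.ReLU(),
            nn.Linear(256, n_params),
        )

    def forward(self, feats, style_emb):
        # feats: (N, feat_dim) features of sampled 3D points;
        # style_emb: (style_dim,) embedding of the style image.
        p = self.hyper(style_emb)
        i = self.feat_dim * self.hidden
        w1 = p[:i].view(self.hidden, self.feat_dim)
        b1 = p[i:i + self.hidden]
        j = i + self.hidden
        w2 = p[j:j + self.hidden * 3].view(3, self.hidden)
        b2 = p[j + self.hidden * 3:]
        h = torch.relu(feats @ w1.T + b1)
        return torch.sigmoid(h @ w2.T + b2)  # per-point RGB in [0, 1]
```

As a smoke test, `HyperStyleMLP()(torch.randn(1024, 128), torch.randn(256))` yields a `(1024, 3)` tensor of colors. This dependence on a learned forward pass is consistent with the excerpt's contrast against methods whose outputs render directly in a standard graphics pipeline.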