“…In Fig. 6, we compare our results with [5], a pioneering work introducing NeRF into stylization. [5] calculates the style and content losses on small sub-sampled patches which approximate large patches. However, such approximation degrades the preservation of content details, as shown in our comparisons.…”
Section: Qualitative Results (mentioning)
confidence: 97%
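The sub-sampling trick described in this excerpt is easy to state concretely: instead of rendering a dense (k·s) × (k·s) patch of rays, only every s-th pixel is rendered, so a k × k grid of rays stands in for the large patch when the VGG-based content and style losses are computed. A minimal sketch of that sampling step (all names, including the `render_rays` usage, are illustrative, not the authors' code):

```python
# Minimal sketch of patch sub-sampling for NeRF stylization losses.
# Only every s-th pixel of a (k*s) x (k*s) window is rendered, so a
# k x k ray grid approximates the large patch at 1/s^2 the cost.
import torch

def sample_subsampled_patch(H, W, k=32, s=4):
    """Return (k*k, 2) pixel coordinates: a k x k grid with stride s,
    placed at a random location inside an H x W image."""
    y0 = torch.randint(0, H - k * s + 1, (1,)).item()
    x0 = torch.randint(0, W - k * s + 1, (1,)).item()
    ys = torch.arange(y0, y0 + k * s, s)
    xs = torch.arange(x0, x0 + k * s, s)
    grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)
    return grid.reshape(-1, 2)

# Hypothetical usage with a NeRF ray renderer: the rendered k x k patch
# and the matching sub-sampled ground-truth pixels feed standard VGG
# content/style losses.
# coords = sample_subsampled_patch(H=756, W=1008)    # (1024, 2)
# patch = render_rays(coords).reshape(1, 3, 32, 32)  # approximates 128x128
```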
“…As is shown in framed yellow boxes, the fine-level shapes like the …”
Figure 6. Qualitative comparisons to Chiang et al. [5]. We compare the stylized results on the Tanks and Temples dataset [24].
Section: Qualitative Results (mentioning)
confidence: 99%
Figure 1. Results of consistent 3D stylization by our method. Given a set of real photographs (a) and a style image (b), our model is capable of generating stylized novel views (c), which are consistent in 3D space by learning a stylized NeRF.
“…Cao et al. [4] stylize indoor scenes using a point cloud representation that cannot be directly used to texture a mesh. Other methods combine novel view synthesis and NST for consistent stylization from only a few input images [8,25,35]. In contrast, we do not require a network during inference to produce stylization results; our results can be rendered directly by a standard graphics pipeline.…”
Section: 3D Style Transfer (mentioning)
confidence: 99%
“…This is difficult since style transfer losses are typically defined on 2D image features [21], so NST does not immediately generalize to 3D meshes. Recently, style transfer has been combined with novel view synthesis to stylize arbitrary scenes with a neural renderer from a sparse set of input images [8,25,35]. These model-based methods require a forward pass during inference and cannot directly be applied to meshes.…”
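For context, the 2D losses referenced in this excerpt are typically Gram-matrix style losses computed on pretrained VGG feature maps. A minimal sketch, assuming torchvision's VGG-16 (the layer indices and weighting are illustrative, and inputs are assumed to be ImageNet-normalized):

```python
# Minimal sketch of a standard 2D Gram-matrix style loss on VGG-16
# feature maps. Layer choice/weighting are illustrative; inputs are
# assumed to be ImageNet-normalized (B, 3, H, W) tensors.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

features = vgg16(weights="DEFAULT").features.eval()
for p in features.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {3, 8, 15, 22}  # relu1_2, relu2_2, relu3_3, relu4_3

def vgg_feats(x):
    out = []
    for i, layer in enumerate(features):
        x = layer(x)
        if i in STYLE_LAYERS:
            out.append(x)
    return out

def gram(f):
    # (B, C, H, W) -> (B, C, C) channel-correlation matrix
    b, c, h, w = f.shape
    f = f.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(rendered, style_img):
    # Defined entirely in 2D image space, which is exactly why it does
    # not transfer directly to a 3D mesh without a rendering step.
    return sum(F.mse_loss(gram(fr), gram(fs))
               for fr, fs in zip(vgg_feats(rendered), vgg_feats(style_img)))
```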
Figure 1. We perform style transfer on reconstructed 3D meshes by synthesizing stylized textures. We compute style transfer losses on views of the scene and backpropagate gradients to the texture. Depth and surface normal data from the mesh enable 3D-aware stylization, preventing artifacts that arise from standard 2D losses. Our stylized meshes can be rendered using the traditional graphics pipeline.
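The loop the caption describes, rendering a view, computing 2D style-transfer losses, and backpropagating the gradients into the texture image, can be sketched generically. This is a simplified illustration assuming a differentiable renderer; every argument below is a placeholder, and the depth/normal-based 3D-aware terms mentioned in the caption are omitted:

```python
# Simplified sketch of texture optimization through a differentiable
# renderer. `render_view`, `style_loss`, and `sample_camera` are
# placeholders for whatever renderer/losses are actually used; the
# depth/normal-aware terms from the caption are omitted here.
import torch

def stylize_texture(mesh, base_texture, style_img, render_view,
                    style_loss, sample_camera, steps=1000, lr=1e-2):
    """render_view(mesh, texture, camera) must be differentiable and
    return a (1, 3, H, W) image so gradients reach the texture."""
    texture = base_texture.clone().requires_grad_(True)  # learnable UV image
    opt = torch.optim.Adam([texture], lr=lr)
    for _ in range(steps):
        cam = sample_camera()                    # random viewpoint
        rendered = render_view(mesh, texture, cam)
        loss = style_loss(rendered, style_img)   # 2D loss on the rendering
        opt.zero_grad()
        loss.backward()  # gradients flow through the renderer into texture
        opt.step()
    return texture.detach()
```

Because the output is an ordinary image texture rather than network weights, the stylized mesh can then be rendered by a standard (non-neural) graphics pipeline, consistent with the claim in the excerpt above.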