2022
DOI: 10.1007/978-3-031-19784-0_37
Unified Implicit Neural Stylization

Cited by 32 publications (3 citation statements)
References 63 publications
“…Downblocks reduce the patch size and increase the channel length, whereas the convolution block (CB) applies instance normalization and leaky ReLU. 23 Downblocks pass information from front to end to mitigate overfitting. The upblock reconstructs location information by joining the corresponding downblock outputs.…”
Section: Network Architecture
confidence: 99%
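The downblock/upblock structure described in the excerpt above can be sketched minimally in NumPy. This is a hedged illustration, not the cited paper's implementation: the channel-doubling step stands in for a learned strided convolution, and the function names (`instance_norm`, `downblock`, `upblock`) are hypothetical. It shows the three ingredients the excerpt names: instance normalization with leaky ReLU in the convolution block, spatial downsampling with channel growth in the downblock, and skip-joining of the matching downblock output in the upblock.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # x: (C, H, W); normalize each channel over its own spatial dims
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def downblock(x):
    # Halve spatial resolution (2x2 average pool) and double the
    # channel count (a stand-in for a learned strided convolution).
    c, h, w = x.shape
    pooled = x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
    out = np.concatenate([pooled, pooled], axis=0)
    # Convolution block: instance normalization followed by leaky ReLU.
    return leaky_relu(instance_norm(out))

def upblock(x, skip):
    # Nearest-neighbour upsample, then join the corresponding
    # downblock output (skip connection) along the channel axis.
    up = x.repeat(2, axis=1).repeat(2, axis=2)
    return np.concatenate([up, skip], axis=0)

x = np.random.randn(4, 8, 8)   # 4 channels, 8x8 patch
d = downblock(x)               # (8, 4, 4): half resolution, double channels
u = upblock(d, x)              # (12, 8, 8): upsampled features + skip
```

The skip connection is what lets the upblock "reconstruct the location information": the pooled path carries semantics while the joined downblock output restores spatial detail lost to downsampling.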
“…Semantic-driven 3D scene editing is a much harder task compared with 2D photo editing because of the high demand for multi-view consistency, the scarcity of paired 3D data, and its entangled geometry and appearance. Previous approaches either rely on laborious annotation (Kania et al. 2022; Yang et al. 2022), only support object deformation or translation (Tschernezki et al. 2022; Kobayashi, Matsumoto, and Sitzmann 2022), or only perform global style transfer (Chen et al. 2022; Chiang et al. 2022; Fan et al. 2022; Huang et al. 2022) without strong semantic meaning. Recently, thanks to the development of the score distillation sampling technique, text-guided editing has emerged as a promising direction with great potential.…”
Section: Related Work
confidence: 99%
“…In this material, we provide a detailed description. Different from previous methods that derive accurate and expensive ground-truth depth maps by pretrained NeRF trained with dense views [12] or captured by high-accuracy depth scanners [11], our goal is to develop a good NeRF with cheap or even free coarse depth maps, i.e., pre-trained single-view depth estimation models and consumer-level depth sensors. To this end, we use Microsoft Azure Kinect, ZED 2, and iPhone 13 Pro to collect a new dataset NVS-RGBD.…”
Section: B Detailed Descriptions of the New Dataset NVS-RGBD
confidence: 99%