2021
DOI: 10.48550/arxiv.2112.03517
Preprint

CG-NeRF: Conditional Generative Neural Radiance Fields

Abstract: The proposed method maintains consistent image quality on various condition types and achieves superior fidelity and diversity compared to existing NeRF-based generative models.

Cited by 4 publications (6 citation statements)
References 18 publications

“…In order to address this issue, many works have employed the method of embedding explicit control into the generation process. For example, CG-NeRF [Jo et al 2021] introduces various soft conditions, including sketches as input. FENeRF involves semantic masks in the generation process as output.…”
Section: 3D-aware Neural Face Image Synthesis (mentioning)
Confidence: 99%
“…CG-NeRF is capable of producing 3D-aware output images that are faithful to the corresponding condition inputs. The image is from [184].…”
Section: Generative NeRF (mentioning)
Confidence: 99%
“…Based on these prior explorations of unconditional generative NeRF, building generative NeRF conditioned on certain guidance has attracted increasing interest. Recently, Jo et al [184] propose a conditional generative neural radiance fields (CG-NeRF), which can generate multi-view images reflecting extra input conditions (e.g., images or texts) as shown in Fig. 15.…”
Section: Generative NeRF (mentioning)
Confidence: 99%
“…People address these scaling issues of NeRF-based GANs in different ways, but the dominating approach is to train a separate 2D decoder to produce a high-resolution image from a low-resolution image or feature grid rendered from a NeRF backbone [43]. During the past six months, there appeared more than a dozen of methods which follow this paradigm (e.g., [6,15,71,47,79,35,75,23,72,78,64]). While using the upsampler allows to scale the model to high resolution, it comes with two severe limitations: 1) it breaks multi-view consistency of a generated object, i.e.…”
Section: Introduction (mentioning)
Confidence: 99%
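
The approach described in this last statement, where a NeRF backbone renders a low-resolution feature grid that a separate 2D decoder upsamples to the final image, can be sketched roughly as follows. This is a minimal PyTorch illustration under assumed names and sizes (NeRFBackbone, Upsampler2D, render_features, the 64x64 grid and the two 2x upsampling stages are hypothetical), not the implementation of any cited method.

import torch
import torch.nn as nn

class NeRFBackbone(nn.Module):
    """Toy stand-in for a generative NeRF backbone: maps a latent code and
    camera pose to a low-resolution feature grid (volume rendering stubbed)."""
    def __init__(self, z_dim=256, feat_dim=32, low_res=64):
        super().__init__()
        self.feat_dim, self.low_res = feat_dim, low_res
        # Real methods ray-march an MLP over 3D points; this linear map is only a stub.
        self.mlp = nn.Sequential(
            nn.Linear(z_dim, 512), nn.ReLU(),
            nn.Linear(512, feat_dim * low_res * low_res),
        )

    def render_features(self, z, camera_pose):
        feats = self.mlp(z)  # camera_pose is ignored in this stub
        return feats.view(z.shape[0], self.feat_dim, self.low_res, self.low_res)

class Upsampler2D(nn.Module):
    """Separate 2D decoder that turns the low-res feature grid into a high-res RGB image."""
    def __init__(self, feat_dim=32, num_doublings=2):
        super().__init__()
        layers, ch = [], feat_dim
        for _ in range(num_doublings):  # each stage doubles the spatial resolution
            layers += [
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(ch, ch, kernel_size=3, padding=1),
                nn.LeakyReLU(0.2),
            ]
        layers.append(nn.Conv2d(ch, 3, kernel_size=3, padding=1))  # project to RGB
        self.net = nn.Sequential(*layers)

    def forward(self, feats):
        return self.net(feats)

# Usage: render features at 64x64, decode to a 256x256 image.
backbone, upsampler = NeRFBackbone(), Upsampler2D(num_doublings=2)
z = torch.randn(2, 256)
camera_pose = torch.eye(4).unsqueeze(0).repeat(2, 1, 1)
low_res_feats = backbone.render_features(z, camera_pose)  # (2, 32, 64, 64)
image = upsampler(low_res_feats)                           # (2, 3, 256, 256)

As the quoted passage notes, because the 2D decoder runs independently on each rendered view, this scaling trick can break the multi-view consistency that the underlying radiance field itself would otherwise provide.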