2021
DOI: 10.48550/arxiv.2112.01314
Preprint

NeurSF: Neural Shading Field for Image Harmonization

Abstract: Image harmonization aims at adjusting the appearance of the foreground to make it more compatible with the background. Due to a lack of understanding of the background illumination direction, existing works are incapable of generating realistic foreground shading. In this paper, we decompose image harmonization into two sub-problems: 1) illumination estimation of the background image and 2) rendering of the foreground object. Before solving these two sub-problems, we first learn a direction-aware illumination …
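
The two-stage decomposition described in the abstract (estimate the background illumination, then re-render the foreground under it) can be pictured with a minimal PyTorch sketch. The module names, layer sizes, and the way the illumination descriptor is injected below are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of the two-stage decomposition: a direction-aware
# illumination descriptor is estimated from the background, then the
# foreground is re-shaded under it. All names and shapes are assumptions.
import torch
import torch.nn as nn

class BackgroundIlluminationEncoder(nn.Module):
    """Predicts a compact illumination descriptor from the background image."""
    def __init__(self, descriptor_dim=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, descriptor_dim)

    def forward(self, background):  # (B, 3, H, W)
        return self.head(self.features(background).flatten(1))  # (B, D)

class ForegroundNeuralRenderer(nn.Module):
    """Re-shades the foreground region conditioned on the illumination descriptor."""
    def __init__(self, descriptor_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1 + descriptor_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, composite, fg_mask, illum):
        B, _, H, W = composite.shape
        illum_map = illum[:, :, None, None].expand(B, illum.shape[1], H, W)
        x = torch.cat([composite, fg_mask, illum_map], dim=1)
        # Only the foreground is re-shaded; the background stays untouched.
        return composite + self.net(x) * fg_mask

# Usage: estimate illumination from the background, then render the foreground.
encoder, renderer = BackgroundIlluminationEncoder(), ForegroundNeuralRenderer()
composite = torch.rand(1, 3, 128, 128)
fg_mask = (torch.rand(1, 1, 128, 128) > 0.5).float()
harmonized = renderer(composite, fg_mask, encoder(composite * (1 - fg_mask)))
```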

Cited by 3 publications (5 citation statements)
References 29 publications (59 reference statements)

“…Ma et al [26] proposed Neural Synthesis, a deep learning-based augmented reality rendering method that uses a convolutional neural network to synthesize a rendering layer that blends the foreground with the background, while simulating shadow and reflection effects to achieve overall harmony. From the perspective of lighting, Hu et al [27] divided harmonization into two sub-tasks: lighting estimation of the background image and rendering of the foreground object. Illumination information is extracted from the background image by the Background Lighting Estimation Module and then used in conjunction with the Neural Rendering Framework to generate harmonized foreground images with consistent shadows.…”
Section: End-to-end Light Models
confidence: 99%
“…Light transfer model (light information is additionally extracted and transferred to different images): Gardner et al [17]; Geoffroy et al [18]; Hung et al [19]; [20]; Garon et al [21]; Gardner et al [22]; Nestmeyer et al [23]; Pandey et al [24]; Inoue et al [25]; Ma et al [26]; Hu et al [27].…”
Section: End-to-end Light Model
confidence: 99%
“…Morph-UGATIT utilizes the adversarial loss L_adv, the identity loss L_idt, the cycle loss L_cyc, and the CAM loss L_CAM during the training phase. For CycleGAN and Morph-UGATIT, we add a structural similarity index metric (SSIM) loss [36] and a maximum mean discrepancy (MMD) loss [37] to enhance the constraints on the anatomical structure of the generated images. Furthermore, Morph-UGATIT also leverages an identity-preserving loss L_pre to ensure the consistency of local structures and general shape [35] between the original and generated images.…”
Section: A. CycleGAN and Morph-UGATIT
confidence: 99%
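
The training objective described in this citation statement amounts to a weighted sum of the listed terms. The sketch below is a hedged illustration: the weight names and default values are placeholders, not the cited papers' actual hyperparameters.

```python
# Hedged sketch of the combined training objective; the loss weights
# (lambda_*) and their defaults are placeholders, not the cited papers'
# actual hyperparameters.
def total_generator_loss(l_adv, l_idt, l_cyc, l_cam, l_ssim, l_mmd, l_pre,
                         lambda_idt=1.0, lambda_cyc=10.0, lambda_cam=1.0,
                         lambda_ssim=1.0, lambda_mmd=1.0, lambda_pre=1.0):
    """Weighted sum of the terms listed in the citation statement above."""
    return (l_adv
            + lambda_idt * l_idt      # identity loss
            + lambda_cyc * l_cyc      # cycle-consistency loss
            + lambda_cam * l_cam      # CAM loss (UGATIT-style attention)
            + lambda_ssim * l_ssim    # added SSIM structural constraint
            + lambda_mmd * l_mmd      # added MMD distribution constraint
            + lambda_pre * l_pre)     # identity-preserving loss (Morph-UGATIT)
```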
“…Their method often fails to generate novel shading due to a lack of proper training data and because it learns to harmonize and perform intrinsic decomposition with a single network. Hu et al [2021] develop a generated dataset and a method to relight humans in outdoor scenes. Their method focuses on a specific use case and requires ground-truth geometry as input, making it difficult to use in the wild.…”
Section: Related Work
confidence: 99%