2021
DOI: 10.1145/3450626.3459872

Total relighting

Abstract: We propose a novel system for portrait relighting and background replacement, which maintains high-frequency boundary details and accurately synthesizes the subject's appearance as lit by novel illumination, thereby producing realistic composite images for any desired scene. Our technique includes foreground estimation via alpha matting, relighting, and compositing. We demonstrate that each of these stages can be tackled in a sequential pipeline without the use of priors (e.g. known background or known illumin…
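The abstract outlines a three-stage pipeline: alpha matting to extract the foreground, relighting the subject under the target scene's illumination, and compositing onto the new background. As a rough illustration of the final stage only, the sketch below applies the standard alpha-compositing equation C = αF + (1 − α)B; the `relight` function is a hypothetical stand-in for the paper's learned relighting network, not its actual API.

```python
import numpy as np

def composite(foreground, alpha, background):
    """Standard alpha compositing: C = alpha * F + (1 - alpha) * B.

    foreground, background: float arrays in [0, 1], shape (H, W, 3)
    alpha: float matte in [0, 1], shape (H, W, 1)
    """
    return alpha * foreground + (1.0 - alpha) * background

def relight(foreground, target_illumination):
    # Hypothetical stand-in for the paper's learned relighting module,
    # which would predict the subject's appearance under the target HDR
    # environment map. Here it simply returns the input unchanged.
    return foreground

# Illustrative usage with random arrays standing in for real images.
H, W = 256, 256
portrait = np.random.rand(H, W, 3)        # foreground estimate from matting stage
matte = np.random.rand(H, W, 1)           # alpha matte from matting stage
new_background = np.random.rand(H, W, 3)  # target scene backplate
target_env = np.random.rand(16, 32, 3)    # target HDR environment map (toy size)

relit = relight(portrait, target_env)
result = composite(relit, matte, new_background)
```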

Cited by 83 publications (4 citation statements) · References 65 publications (81 reference statements)
“…We show that this design does not reproduce well the intricate high‐frequency view‐dependent effects of our setting. Volux‐GAN [TFM*22] extends the lighting to arbitrary environment maps, but requires pseudo ground truth for supervision, which in turn needs to be obtained from costly light stage data [POEL*21]. Similar to our approach, MesoGAN [DNR*23] uses generative reflectance fields, but only considers 3D texture shells and requires synthetic training data.…”
Section: Related Work (mentioning)
confidence: 99%
“…As the shell‐volume is constrained to the close proximity of the face, our method is incapable of reconstructing the background. We segment out the background in the training frame using an off‐the‐shelf face segmentation method [PEL*21].…”
Section: Model Training (mentioning)
confidence: 99%
“…Commonly used priors include piece‐wise constant albedos [CZL18,LS18a,LS18b,MCZ*18,LBP*12], or sparsity of extracted albedo values [MSZ*21, GMLMG12]. A few works exploit data‐driven priors instead of hand‐crafted priors [BBS14, ZKE15, SGK*19, LSR*20, PEL*21, YTL20], which can be subject to domain discrepancy. IBL‐NeRF takes inspiration from the aforementioned prior works using single images, and adds constraints in the image space.…”
Section: Related Work (mentioning)
confidence: 99%