2018
DOI: 10.1111/cgf.13341
From Faces to Outdoor Light Probes

Abstract: Image‐based lighting has allowed the creation of photo‐realistic computer‐generated content. However, it requires the accurate capture of the illumination conditions, a task neither easy nor intuitive, especially for the average digital photography enthusiast. This paper presents an approach to directly estimate an HDR light probe from a single LDR photograph, shot outdoors with a consumer camera, without specialized calibration targets or equipment. Our insight is to use a person's face as an outdoor light pro…

Cited by 45 publications (47 citation statements)
References 44 publications
“…Concurrent to this work, Zhang et al [31] extend [12] with a more flexible parametric sky model. In another closely-related paper, Calian et al [2] estimate HDR outdoor lighting from a single face image. While they employ a similar deep autoencoder to learn a data-driven model, they rely on a multi-step non-linear optimization approach over the space of face albedo and sky parameters, which is time-consuming and prone to local minima.…”
Section: Ground Truth (mentioning)
confidence: 99%
“…To learn our sky model, we adopt an autoencoder architecture which projects a full HDR sky down to 64 parameters (encoder), and subsequently reconstructs it (decoder). This is conceptually similar to [2], with the key differences that we employ a more robust training scheme which includes occlusions and radiometric distortions, making it amenable to full end-to-end learning rather than the non-linear inverse rendering framework of [2]. In addition, we employ a different architecture based on residual layers [11] (see fig.…”
Section: Deep Autoencoder (mentioning)
confidence: 99%
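The encoder-decoder idea this excerpt describes (compressing a full HDR sky down to a 64-parameter latent code, then reconstructing it) can be illustrated with a minimal linear stand-in. This is a hedged sketch only: the cited papers use deep, nonlinear autoencoders with residual layers, and all names, shapes, and the synthetic data below are assumptions made for the illustration.

```python
import numpy as np

LATENT_DIM = 64  # illustrative latent size, echoing the 64 parameters quoted above

def fit_linear_autoencoder(skies):
    """skies: (N, D) matrix of flattened sky maps (e.g. log radiance)."""
    mean = skies.mean(axis=0)
    # SVD yields the optimal linear encoder/decoder (PCA) for this data.
    _, _, vt = np.linalg.svd(skies - mean, full_matrices=False)
    basis = vt[:LATENT_DIM]                    # (64, D) decoder rows
    encode = lambda x: (x - mean) @ basis.T    # D -> 64 latent code
    decode = lambda z: z @ basis + mean        # 64 -> D reconstruction
    return encode, decode

# Synthetic stand-in data: low-rank structure plus noise mimics the idea
# that plausible skies occupy a low-dimensional subspace of all images.
rng = np.random.default_rng(0)
skies = rng.normal(size=(500, 32)) @ rng.normal(size=(32, 1024))
skies += 0.01 * rng.normal(size=(500, 1024))

encode, decode = fit_linear_autoencoder(skies)
recon = decode(encode(skies))
print("reconstruction MSE:", np.mean((skies - recon) ** 2))
```

A real implementation would replace these linear maps with a convolutional network and, as the excerpt notes, train with occlusions and radiometric distortions to make the model robust end to end.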
“…Debevec et al [3] first showed that photographs of a mirrored sphere with different exposures can be used to compute the illumination at the sphere's location. Subsequent works show that beyond mirrored spheres, it is also possible to capture illumination using hybrid spheres [4], known 3D objects [24], objects with known surface materials [8], or even human faces [1] as proxies for light probes.…”
Section: Related Work (mentioning)
confidence: 99%
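The multi-exposure idea attributed to Debevec et al [3] above can be sketched as a weighted merge of several LDR shots, taken at known exposure times, into one HDR radiance map. The sketch assumes a linear camera response (the original work also recovers the response curve, omitted here) and a hat-shaped pixel weight; all function and variable names are illustrative.

```python
import numpy as np

def merge_exposures(ldr_images, exposure_times, sat=0.95):
    """ldr_images: list of (H, W) arrays in [0, 1]; returns an HDR radiance map."""
    num = np.zeros_like(ldr_images[0], dtype=float)
    den = np.zeros_like(num)
    for img, t in zip(ldr_images, exposure_times):
        # Hat weighting: trust mid-tones, down-weight dark (noisy) and
        # bright (near-clipped) pixels; zero out saturated pixels entirely.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        w = np.where(img >= sat, 0.0, w)
        num += w * img / t          # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)

# Synthetic scene: uniform radiance with one bright "sun" pixel that
# clips in the longest exposure but is recovered from the shorter ones.
radiance = np.full((4, 4), 0.2)
radiance[0, 0] = 50.0
times = [1 / 1000, 1 / 125, 1 / 15]
ldrs = [np.clip(radiance * t, 0.0, 1.0) for t in times]
hdr = merge_exposures(ldrs, times)
print("recovered sun radiance:", hdr[0, 0])
```

The short exposures carry the sun's true intensity while the long exposure fills in the dim sky, which is exactly why a single LDR shot cannot serve directly as a light probe.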
“…This is closely related to Zhang et al [26] who learn to map LDR panoramas to HDR environment maps via an encoder-decoder network. Similarly, Calian et al [2] (as well as the concurrent work of Hold-Geoffroy et al [8]) employ a deep autoencoder to learn a data-driven illumination model. They use this learned model to estimate lighting from a face image via a multi-step non-linear optimization approach over the space of face albedo and sky parameters, which is time-consuming and prone to local minima.…”
Section: Related Work (mentioning)
confidence: 99%
“…Cheng et al [3] estimate lighting from the front and back camera of a mobile phone. However, they represent lighting using low-frequency spherical harmonics (SH), which, as demonstrated in [2], does not appropriately model outdoor lighting.…”
Section: Related Work (mentioning)
confidence: 99%
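The low-frequency limitation of spherical harmonics noted in the last excerpt can be made concrete: projecting a sharp, sun-like peak onto an order-2 real SH basis (9 coefficients) smears its energy across the sphere, so the reconstructed peak falls far short of the true radiance. The basis constants below are the standard real SH normalizations; the Monte Carlo setup and all names are purely illustrative.

```python
import numpy as np

def sh_basis(d):
    """Real spherical harmonics up to order 2 for a unit direction d = (x, y, z)."""
    x, y, z = d
    return np.array([
        0.282095,                                    # l = 0
        0.488603 * y, 0.488603 * z, 0.488603 * x,    # l = 1
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),                  # l = 2
    ])

# Monte Carlo projection of a narrow, HDR-like "sun" centered at +z.
rng = np.random.default_rng(1)
dirs = rng.normal(size=(20000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
sun = np.where(dirs[:, 2] > 0.999, 1000.0, 0.0)      # sharp radiance peak
basis_mat = np.vstack([sh_basis(d) for d in dirs])   # (N, 9)
coeffs = 4.0 * np.pi * np.mean(sun[:, None] * basis_mat, axis=0)

# Reconstruct toward the sun direction: a small fraction of the true 1000,
# showing how 9 SH coefficients blur away high-frequency outdoor lighting.
recon_peak = coeffs @ sh_basis(np.array([0.0, 0.0, 1.0]))
print(f"SH-reconstructed peak: {recon_peak:.1f} (true peak: 1000.0)")
```

This is the gap the learned sky models above aim to close: retaining the sun's concentrated, high-dynamic-range energy that a 9-term SH expansion cannot represent.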