EyeNeRF
2022
DOI: 10.1145/3528223.3530130

Abstract: A unique challenge in creating high-quality animatable and relightable 3D avatars of real people is modeling human eyes, particularly in conjunction with the surrounding periocular face region. The challenge of synthesizing eyes is multifold as it requires 1) appropriate representations for the various components of the eye and the periocular region for coherent viewpoint synthesis, capable of representing diffuse, refractive and highly reflective surfaces, 2) disentangling skin and eye appearance from environ…


Cited by 17 publications (13 citation statements)
References 46 publications
“…We complete the full scene model by combining our novel model for the periocular region with an eyeball model from EyeNeRF [LMM*22]. This parametric eyeball model consists of two spheres smoothly blended into each other as defined by the cornea radius, eyeball radius, and the blending angle.…”
Section: Methods
Mentioning confidence: 99%
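The two-sphere construction described in the quote can be sketched as a surface-radius profile around the optical axis. The sketch below is illustrative only: the parameter values, the choice of a smoothstep blend, and the function names are assumptions, not EyeNeRF's actual parameterization.

```python
import numpy as np

def smoothstep(t):
    """Cubic Hermite ramp: 0 for t <= 0, 1 for t >= 1."""
    t = np.clip(t, 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def eyeball_radius(theta, r_eye=12.0, r_cornea=8.0,
                   blend_angle=np.deg2rad(27.0),
                   blend_width=np.deg2rad(8.0)):
    """Surface radius (mm) at polar angle theta from the optical axis for a
    two-sphere eyeball: a corneal sphere blended into the scleral sphere
    around blend_angle. All default values are illustrative assumptions."""
    # Offset d of the cornea-sphere centre along the optical axis, chosen so
    # the two spheres intersect exactly at blend_angle.
    d = r_eye * np.cos(blend_angle) - np.sqrt(
        r_cornea**2 - (r_eye * np.sin(blend_angle))**2)
    # Distance from the eyeball centre to the cornea sphere's surface along
    # a ray at polar angle theta (law of cosines).
    r_c = d * np.cos(theta) + np.sqrt(r_cornea**2 - (d * np.sin(theta))**2)
    # Smoothly switch from cornea to sclera across the blend band.
    s = smoothstep((theta - (blend_angle - blend_width)) / (2.0 * blend_width))
    return (1.0 - s) * r_c + s * r_eye
```

With these values the cornea bulges about 0.8 mm past the 12 mm scleral sphere at theta = 0, and the surface relaxes to the plain eyeball sphere beyond the blend band.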
“…3D morphable models have been crucial to automated tracking and animation of eyeballs and periocular region [WBM*16], with special emphasis on tracking eyelid contours [WXLY17]. The periocular skin region exhibits complex deformations; data‐driven techniques from multi‐view captures have successfully achieved accurate reconstruction using either mesh [KADM22] or volumetric representations [LMM*22], with limited capacity to animate by interpolating between captured training gazes/expressions. [SWW*20, CSK*22] generate plausible gaze and periocular animations by conditioning their underlying face mesh and implicit volume respectively on the gaze direction for egocentric cameras in VR applications.…”
Section: Related Work
Mentioning confidence: 99%
“…While their approach enables efficient relighting for real-time animation using a teacher-student framework, we observe that the bottleneck lighting encoding without visibility information in their student model leads to severe overfitting when applied to hand relighting. EyeNeRF [24] enables the joint learning of geometry and relightable appearance of a moving eyeball model. Compared to eyes, hands exhibit significantly more diverse pose variations, making explicit visibility incorporation essential.…”
Section: Model-based Human Relighting
Mentioning confidence: 99%
“…Lemley et al [29] further improved the efficiency of gaze estimation with a simplified convolutional neural network on low-quality devices. EyeNeRF provides an efficient method for generating large eye datasets, which may benefit gaze estimation [30]. Recently, a novel adaptive feature fusion network, AFF-Net, was proposed [4].…”
Section: Introduction
Mentioning confidence: 99%