2019
DOI: 10.1109/tpami.2017.2778103

MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation

Abstract: Learning-based methods are believed to work well for unconstrained gaze estimation, i.e. gaze estimation from a monocular RGB camera without assumptions regarding user, environment, or camera. However, current gaze datasets were collected under laboratory conditions and methods were not evaluated across multiple datasets. Our work makes three contributions towards addressing these limitations. First, we present the MPIIGaze dataset, which contains 213,659 full face images and corresponding ground-truth …

Cited by 383 publications (414 citation statements: 3 supporting, 407 mentioning, 4 contrasting)
References 65 publications
“…However, person-independent gaze errors are still insufficient for many applications [3,43,19,2]. While significant gains can be obtained by training person-specific models, this requires many thousands of training images per subject [58]. On the other hand, CNNs are prone to over-fitting if trained with very few (k ≤ 9) samples.…”
Section: Related Work
confidence: 99%
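The over-fitting concern in this statement motivates per-person adaptation that updates as few parameters as possible. Below is a minimal sketch (not from the cited paper) of one common approach: freeze the pre-trained CNN backbone and fit only a small linear head on the k calibration samples. The function name, loss, and hyper-parameters are illustrative assumptions.

```python
# Illustrative sketch, not the cited paper's method: adapt a pre-trained
# gaze CNN to a new person from k (e.g. k <= 9) calibration samples by
# freezing the backbone and fitting only a linear head, which limits the
# number of trainable parameters and hence the risk of over-fitting.
import torch
import torch.nn as nn

def adapt_to_person(backbone: nn.Module, head: nn.Linear,
                    images: torch.Tensor, gaze: torch.Tensor,
                    steps: int = 100, lr: float = 1e-3) -> nn.Linear:
    """Fit only `head` on k calibration samples from one person.

    images: (k, C, H, W) calibration images.
    gaze:   (k, 2) ground-truth gaze angles (pitch, yaw) in radians.
    """
    backbone.eval()                       # backbone stays frozen
    for p in backbone.parameters():
        p.requires_grad_(False)
    with torch.no_grad():
        feats = backbone(images)          # (k, D) person-independent features
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(head(feats), gaze)
        loss.backward()
        opt.step()
    return head
```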
“…Even at 90°–135° head yaw, a significant part of one eyeball is often still visible and informative for gaze estimation (see Supplemental). Existing methods [12,32] do not deal with these cases and typically assume that the subject is facing the camera. However, such models do not generalize well to challenging applications such as robotics or surveillance.…”
Section: Related Work
confidence: 99%
“…Generic gaze estimator. As our gaze estimator, we use GazeNet [45]. It is based on a VGG-16 architecture.…”
Section: Experimental Setting
confidence: 99%
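Since the statement only says that GazeNet builds on VGG-16, here is a minimal sketch of what a VGG-16-based gaze regressor can look like; the pooling, head sizes, and 2D pitch/yaw output convention are illustrative assumptions rather than the exact GazeNet configuration.

```python
# Minimal sketch of a VGG-16-based gaze regressor in the spirit of
# GazeNet: a VGG-16 convolutional backbone followed by a small regression
# head that outputs 2D gaze angles (pitch, yaw). The head dimensions are
# assumptions for illustration, not the published GazeNet configuration.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class VGGGazeRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = vgg16(weights=None)     # VGG-16 feature extractor
        self.features = backbone.features
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.head = nn.Sequential(         # regression head (assumed sizes)
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(4096, 2),            # (pitch, yaw) in radians
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.pool(self.features(x)))

# Usage: angles = VGGGazeRegressor()(torch.randn(1, 3, 224, 224))  # (1, 2)
```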