Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3290605.3300646
Evaluation of Appearance-Based Methods and Implications for Gaze-Based Applications

Abstract: Appearance-based gaze estimation methods that require only an off-the-shelf camera have improved significantly, but they are still not widely used in the human-computer interaction (HCI) community. This is partly because it remains unclear how they perform compared to model-based approaches as well as to dominant, special-purpose eye tracking equipment. To address this limitation, we evaluate the performance of state-of-the-art appearance-based gaze estimation for interaction scenarios with and without persona…

Cited by 93 publications (65 citation statements)
References 81 publications (95 reference statements)
“…The functions E and D in our transforming encoder-decoder architecture can be implemented with any CNN architecture. We select the DenseNet CNN architecture [18] both for our architecture and for our reimplementation of state-of-the-art methods [26,55]. The latent codes z_a, z_g, and z_h are defined to have dimensions (64), (3 × 2), and (3 × 16) respectively.…”
Section: Neural Network Configurations
mentioning, confidence: 99%
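As a point of reference, the latent-code split described in this statement can be sketched in a few lines of PyTorch. This is a minimal illustration only: the DenseNet backbone is replaced by a tiny convolutional stack for brevity, and all class and variable names are hypothetical, not taken from the cited paper's code.

```python
# Minimal sketch of splitting an encoder output into the latent codes
# z_a (64), z_g (3 x 2), z_h (3 x 16) described in the quoted statement.
# The real work uses a DenseNet backbone; a tiny conv stack stands in here.
import torch
import torch.nn as nn

Z_A, Z_G, Z_H = 64, 3 * 2, 3 * 16  # appearance, gaze, head-pose dims

class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, Z_A + Z_G + Z_H)

    def forward(self, x):
        z = self.head(self.features(x))
        z_a = z[:, :Z_A]                          # appearance code, (B, 64)
        z_g = z[:, Z_A:Z_A + Z_G].view(-1, 3, 2)  # gaze code, (B, 3, 2)
        z_h = z[:, Z_A + Z_G:].view(-1, 3, 16)    # head code, (B, 3, 16)
        return z_a, z_g, z_h

enc = TinyEncoder()
z_a, z_g, z_h = enc(torch.randn(1, 3, 64, 64))
print(z_a.shape, z_g.shape, z_h.shape)  # (1, 64) (1, 3, 2) (1, 3, 16)
```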
“…Ours vs Polynomial fit to PoR. In [55], Zhang et al. fit a 3rd-order polynomial function to correct initial point-of-regard (PoR) estimates from a person-independent gaze CNN. We train a DenseNet CNN for this purpose and intersect the predicted gaze ray (defined by gaze origin and direction in 3D with respect to the original camera) with the z = 0 plane to acquire an estimate for PoR.…”
Section: Comparison With State-of-the-art
mentioning, confidence: 99%
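The ray-plane intersection mentioned here is a standard geometric step. Below is a minimal NumPy sketch of it, assuming camera coordinates in which the screen lies in the z = 0 plane; the function name and conventions are illustrative, not the authors'.

```python
# Intersect a predicted 3D gaze ray (origin o, direction d, in camera
# coordinates) with the screen plane z = 0 to obtain a point-of-regard.
import numpy as np

def por_from_gaze_ray(origin, direction, eps=1e-8):
    """Return the point where origin + t * direction meets z = 0."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    if abs(d[2]) < eps:
        raise ValueError("gaze ray is (nearly) parallel to the screen plane")
    t = -o[2] / d[2]   # solve o_z + t * d_z = 0 for the ray parameter t
    if t < 0:
        raise ValueError("screen plane lies behind the gaze origin")
    return o + t * d   # 3D point with z == 0

# Example: eye 40 cm in front of the camera, looking slightly down-left.
print(por_from_gaze_ray([0.02, 0.01, 0.40], [-0.1, -0.05, -1.0]))
```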
“…Solutions to these issues can be both use-case and hardware dependent. One possible general approach to overcoming these limitations, however, is to augment our infrared-illumination-based hybrid approach with gaze-estimation techniques that use natural light [35,36,58]. In a dual-model system, the less accurate natural-light methods can take over in situations where infrared eye features are difficult to localize, extending the range of mobile eye-tracking-based applications.…”
Section: Discussion
mentioning, confidence: 99%
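The dual-model fallback described above reduces to a simple dispatch rule. The sketch below assumes hypothetical estimator interfaces, an IR pipeline that returns None when eye features cannot be localized and a natural-light fallback; it illustrates the idea only, not any published implementation.

```python
# Prefer the more accurate infrared (IR) pipeline when glint/pupil
# features are found; fall back to a natural-light appearance-based
# estimate otherwise. Both estimator callables are placeholders.
from typing import Callable, Optional, Tuple

Gaze = Tuple[float, float]  # e.g. yaw/pitch in radians

def estimate_gaze(frame,
                  ir_estimator: Callable[[object], Optional[Gaze]],
                  natural_light_estimator: Callable[[object], Gaze]) -> Gaze:
    ir_gaze = ir_estimator(frame)        # None when IR features fail
    if ir_gaze is not None:
        return ir_gaze                   # accurate IR path
    return natural_light_estimator(frame)  # less accurate fallback
```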
“…Eye-gaze tracking technology has matured and become inexpensive. This technology can be built into computers, HMDs, and mobile devices [8]. The direction of eye gaze is a crucial input medium and control modality through which humans express socially relevant information, particularly the individual's cognitive state [7,20,21].…”
Section: Eye-gaze Tracking
mentioning, confidence: 99%
“…Nowadays, eye tracking has matured and become an important research topic in computer vision and pattern recognition, because human gaze positions and movements are essential information for many applications ranging from diagnostic to interactive ones [3][4][5][6]. Eye tracking equipment, either worn on the body (head-mounted) [7] or strategically located in the environment [8], is a key requirement of gaze-based applications. In contrast, recent advances in the applications of head-mounted displays (HMDs) for virtual reality have driven the development of HMD-integrated eye-tracking technology.…”
Section: Introduction
mentioning, confidence: 99%