2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.01218
Generalizing Eye Tracking With Bayesian Adversarial Learning

Cited by 86 publications (61 citation statements) · References 36 publications
“…The ultimate solution is to collect training data that covers the whole data space; however, this is practically impossible. Several studies have attempted to extract subject-invariant features from eye images [38], [51], [40]. Park et al. [38] convert the original eye images into a unified gaze representation, which is a pictorial representation of the eyeball, the iris, and the pupil.…”
Section: Deep Feature From Appearance
confidence: 99%
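As a rough illustration of what a pictorial gaze representation can look like, the sketch below renders a toy eye image from pitch/yaw angles: a fixed eyeball disc with an iris/pupil disc displaced in the gaze direction. The radii and the angle-to-offset mapping are arbitrary choices for illustration, not the construction used by Park et al. [38].

```python
import numpy as np

def pictorial_gaze(pitch, yaw, size=64):
    """Render a toy 'pictorial' eye: a fixed eyeball disc with an
    iris/pupil disc displaced in the gaze direction. Illustrative only;
    radii and the angle-to-offset mapping are arbitrary choices."""
    yy, xx = np.mgrid[0:size, 0:size].astype(np.float32)
    cx = cy = size / 2.0
    r_eye, r_iris = 0.45 * size, 0.15 * size
    img = np.zeros((size, size), dtype=np.float32)

    # Eyeball: a filled disc at mid gray.
    img[(xx - cx) ** 2 + (yy - cy) ** 2 <= r_eye ** 2] = 0.5

    # Iris/pupil centre displaced by projecting the gaze angles
    # onto the image plane (small-angle approximation).
    ix = cx + (r_eye - r_iris) * np.sin(yaw)
    iy = cy - (r_eye - r_iris) * np.sin(pitch)
    img[(xx - ix) ** 2 + (yy - iy) ** 2 <= r_iris ** 2] = 1.0
    return img

# Example: gaze 15 degrees up and to the right.
img = pictorial_gaze(np.radians(15), np.radians(15))
```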
“…Recent work has shown that appearance-based methods can offer good generalization capability even without user-specific data [51]. Thanks to the emerging large-scale data sets and deep learning techniques, the performance of learning-based eye-tracking models has been steadily improving [22,43,47]. Our system unifies eye tracking and video analysis to track human visual attention, allowing video analysis to focus on salient visual content, thereby reducing computation and energy costs.…”
Section: Eye Tracking
confidence: 99%
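To make "appearance-based" concrete, here is a minimal sketch of a learning-based gaze estimator: a small CNN that regresses pitch/yaw angles directly from a grayscale eye crop. The architecture, layer sizes, and input resolution are illustrative assumptions, not the model from any of the cited works.

```python
import torch
import torch.nn as nn

class GazeNet(nn.Module):
    """Minimal appearance-based gaze regressor: grayscale eye crop in,
    (pitch, yaw) in radians out. Layer sizes are illustrative."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling to one vector
        )
        self.head = nn.Linear(128, 2)  # regress pitch and yaw

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = GazeNet()
eye = torch.randn(8, 1, 36, 60)  # batch of grayscale eye crops
gaze = model(eye)                # (8, 2) pitch/yaw predictions
```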
“…Early solutions for gaze-on-screen tracking suffer from low tracking accuracy due to inherent technology limitations and the lack of sufficient training data [5,6]. Recently introduced large-scale data sets, e.g., GazeCapture [7], have greatly enhanced the accuracy of learning-based gaze-on-screen models [8,9,10,11], which capture face-related features, determine the relative mapping between the face and screen coordinate systems, and infer the gaze fixation position on the screen. However, adopting learning-based methods on mobile devices still faces serious performance challenges.…”
Section: Introduction
confidence: 99%
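The face-to-screen mapping the quote describes can be illustrated with standard ray-plane intersection: given an eye position and a unit gaze direction in camera coordinates, intersecting the gaze ray with the screen plane yields the fixation point. The function name and the calibration values below are hypothetical.

```python
import numpy as np

def gaze_to_screen(origin, direction, plane_point, plane_normal):
    """Intersect a gaze ray (eye origin + unit direction, camera
    coordinates) with the screen plane. Standard ray-plane
    intersection; the calibration values used below are made up."""
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-6:
        return None  # gaze is parallel to the screen
    t = np.dot(plane_point - origin, plane_normal) / denom
    if t < 0:
        return None  # screen plane lies behind the eye
    return origin + t * direction  # 3D hit point on the screen plane

# Hypothetical calibration: screen plane at z = 0, eye 40 cm in front.
d = np.array([0.05, -0.02, -1.0])
hit = gaze_to_screen(
    origin=np.array([0.0, 0.0, 0.40]),
    direction=d / np.linalg.norm(d),
    plane_point=np.zeros(3),
    plane_normal=np.array([0.0, 0.0, 1.0]),
)
```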