2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2017
DOI: 10.1109/cvprw.2017.284
It’s Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation

Abstract: Eye gaze is an important non-verbal cue for human affect analysis. Recent gaze estimation work indicated that information from the full face region can benefit performance. Pushing this idea further, we propose an appearance-based method that, in contrast to a long-standing line of work in computer vision, only takes the full face image as input. Our method encodes the face image using a convolutional neural network with spatial weights applied on the feature maps to flexibly suppress or enhance information in…
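The spatial-weighting mechanism the abstract describes can be illustrated with a minimal sketch: each channel of a CNN feature map is scaled by a shared per-location weight map, which lets the model suppress or enhance facial regions. In the paper the weight map is produced by a learned branch of the network; here a fixed hand-built map stands in for it, purely to show the mechanism.

```python
import numpy as np

def apply_spatial_weights(features, weight_map):
    """Scale every channel of a feature map by one shared spatial weight map.

    features:   array of shape (C, H, W) - CNN activations.
    weight_map: array of shape (H, W)    - per-location importance weights
                (learned in the actual model; fixed here for illustration).
    """
    # Broadcasting multiplies each of the C channels by the same (H, W) map.
    return features * weight_map[None, :, :]

rng = np.random.default_rng(0)
feats = rng.standard_normal((64, 7, 7))   # toy feature map

w = np.ones((7, 7))
w[:2, :] = 0.0                            # e.g. zero out the top rows

out = apply_spatial_weights(feats, w)     # top rows suppressed, rest unchanged
```

The same broadcasting pattern applies unchanged inside a deep-learning framework, where `weight_map` would be the output of a small convolutional branch trained end-to-end.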

Cited by 340 publications (384 citation statements)
References 32 publications (80 reference statements)
“…Dataset statistics. Joint distributions of the gaze yaw and pitch for TabletGaze [10], MPIIFaceGaze [31], iTracker [12] and our Gaze360 dataset. The Mollweide projection used to visualize the full unit sphere surface.…”
Section: Gaze360 Dataset Summary
confidence: 99%
“…The gaze prediction output regresses the angle of the gaze relative to the camera view. In previous work, 3D gaze was predicted as a unit gaze vector [17,34] or as its spherical coordinates [23,31]. We use spherical coordinates which we believe to be more naturally interpretable in this context.…”
Section: Gaze360 Model
confidence: 99%
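The citation above contrasts two equivalent 3D gaze representations: a unit gaze vector and its spherical coordinates (yaw, pitch). A small sketch of the conversion between them, under one common convention (pitch as elevation, yaw as horizontal angle, camera looking down the negative z-axis; conventions vary across papers):

```python
import math

def yaw_pitch_to_vector(yaw, pitch):
    """Spherical gaze angles (radians) -> unit 3D gaze vector.

    Assumed convention: pitch is elevation above the horizontal plane,
    yaw is the horizontal angle, camera looks along -z. Other works
    flip signs or axes, so check the convention of each dataset.
    """
    x = -math.cos(pitch) * math.sin(yaw)
    y = -math.sin(pitch)
    z = -math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

def vector_to_yaw_pitch(v):
    """Inverse mapping: unit gaze vector -> (yaw, pitch)."""
    x, y, z = v
    yaw = math.atan2(-x, -z)
    pitch = math.asin(-y)
    return (yaw, pitch)

# Straight-ahead gaze corresponds to yaw = pitch = 0.
v = yaw_pitch_to_vector(0.0, 0.0)  # -> (0.0, 0.0, -1.0)
```

The two forms carry the same information; the spherical pair is often preferred for regression targets because the two angles are directly interpretable, while the vector form is convenient for computing angular error via a dot product.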
“…We have taken entire face as input instead of only eyes. According to [42], gaze can be more accurately predicted when the entire face is considered. Our proposed network contains five Convolution layers.…”
Section: Proposed Network Architecture
confidence: 99%
“…Most of these studies required special-purpose eye tracking equipment [2] that constrained users' mobility or manual data annotation [1] that prevented the study of natural attentive behaviour at scale. One way to overcome these limitations is to instead rely on the high-resolution front-facing cameras readily integrated into these devices in combination with computer vision for gaze estimation [5], [6], [7], [8]. However, despite significant progress in recent years, such methods are still too inaccurate to analyse attention in a fine-grained manner, e.g.…”
Section: Introduction
confidence: 99%