2013
DOI: 10.1109/tpami.2012.101

Appearance-Based Gaze Estimation Using Visual Saliency

Abstract: We propose a gaze sensing method using visual saliency maps that does not need explicit personal calibration. Our goal is to create a gaze estimator using only the eye images captured from a person watching a video clip. Our method treats the saliency maps of the video frames as the probability distributions of the gaze points. We aggregate the saliency maps based on the similarity in eye images to efficiently identify the gaze points from the saliency maps. We establish a mapping between the eye images to the…
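The core idea in the abstract — weight each frame's saliency map by how similar its eye image is to the current one, then read the gaze point off the aggregated distribution — can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual estimator; the Gaussian similarity kernel, the `sigma` bandwidth, and the flat feature representation of eye images are assumptions made for the sake of the example.

```python
import numpy as np

def aggregate_saliency(eye_feats, saliency_maps, query_feat, sigma=1.0):
    """Weight each frame's saliency map by the similarity of its eye-image
    feature vector to the query, then average.  The aggregated map is
    treated as a probability distribution over gaze points.

    eye_feats:     (N, D) eye-image features, one per video frame
    saliency_maps: (N, H, W) saliency map per frame
    query_feat:    (D,) feature of the eye image to estimate gaze for
    """
    d = np.linalg.norm(eye_feats - query_feat, axis=1)      # (N,) distances
    w = np.exp(-d**2 / (2 * sigma**2))                      # Gaussian weights
    w /= w.sum()
    agg = np.tensordot(w, saliency_maps, axes=1)            # (H, W) weighted mean
    agg /= agg.sum()                                        # normalize to a pdf
    return agg

def estimate_gaze(agg_map):
    """Take the mode of the aggregated distribution as the gaze point."""
    return np.unravel_index(np.argmax(agg_map), agg_map.shape)
```

In this toy form, frames whose eye images resemble the query dominate the average, so saliency mass they share concentrates at the estimated gaze point.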


Cited by 170 publications (104 citation statements)
References 25 publications
“…An object of interest is salient if it is rare or novel relative to its surroundings. A variety of applications can benefit from saliency modeling, e.g., object detection [2][3], image segmentation [4][5], image retargeting [6][7], image/video compression [8][9], visual tracking [10][11], gaze estimation [12], robot navigation [13], image/video quality assessment [14][15], and advertising design [16].…”
Section: Introduction (mentioning)
confidence: 99%
“…For each of the ten stimuli, the first corresponding web camera frame is taken as an input by the landmarker (Zhu and Ramanan 2012) to detect the eye corners. In Sugano et al (2013), the eye corners are detected using the OMRON OKAO vision library. To detect the eye corners for the subsequent frames, we apply template matching using the eye corners of the first frame (for each stimulus) as templates.…”
Section: Results (mentioning)
confidence: 99%
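The template-matching step described in the excerpt (tracking eye corners by matching first-frame patches against later frames) is typically done with OpenCV's `cv2.matchTemplate`. As a self-contained illustration, here is a NumPy sketch of normalized cross-correlation matching, a simplified analogue of `TM_CCOEFF_NORMED`; the brute-force search and function name are assumptions for the example, not the cited implementation.

```python
import numpy as np

def match_template(frame, template):
    """Slide `template` over grayscale `frame` and return the top-left
    corner (y, x) of the best match under mean-centered normalized
    cross-correlation."""
    th, tw = template.shape
    fh, fw = frame.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            patch = frame[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p**2).sum() * (t**2).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```

An exact copy of the template scores 1.0, so the eye-corner patch extracted from the first frame localizes itself in subsequent frames; in practice one would restrict the search window around the previous position for speed.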
“…Clearly, the comparison with this method is not feasible as the authors use different equipment to reconstruct an accurate 3D eye model. Sugano et al (2013) adopt an appearance-based gaze estimator and use visual saliency for auto-calibration. The authors reported an error of 3.5°.…”
Section: Comparison to the State-of-the-art (mentioning)
confidence: 99%
“…Secondly, while user-specific models are usually better, user adaptation often relies on a manual calibration requiring user cooperation or manual data processing [5]. Unsupervised methods based on visual saliency have been considered, but they are mainly restricted to screen-gazing applications [24].…”
Section: Introduction (mentioning)
confidence: 99%