2018
DOI: 10.1016/j.knosys.2017.10.010
Learning a gaze estimator with neighbor selection from large-scale synthetic eye images

Cited by 26 publications (18 citation statements). References 11 publications.
“…Implementation Details: To verify the effectiveness of the proposed method for gaze estimation, 3 public datasets (UTView [4], SynthesEyes [5], UnityEyes [9]) are used to train the estimator with k-NN [8]; the MPIIGaze dataset [10] and the purified MPIIGaze dataset (purified by the proposed method) are used to test accuracy. The eye gaze estimation network is similar to [29][31]: the input is a 35 × 55 grayscale image passed through 5 convolutional layers followed by 3 fully connected layers, the last one encoding the 3-dimensional gaze vector: (1) Conv 32@3×3, (2) Conv 32@3×3, (3) Conv 64@3×3, (4) Max-Pooling 3×3, (5) Conv 80@3×3, (6) Conv 192@3×3, (7) Max-Pooling 2×2, (8) FC9600, (9) FC1000, (10) FC3, (11) Euclidean loss.…”
Section: Appearance-based Gaze Estimation
confidence: 99%
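The layer listing in the excerpt above can be sanity-checked with a small shape trace. The conventions below are assumptions, since the excerpt does not state them: "valid" (unpadded) 3×3 convolutions, and non-overlapping max-pooling with stride equal to the pool size and floor division. Under a different padding convention the flattened length would differ, and it need not equal the 9600 width of the first fully connected layer.

```python
# Trace feature-map sizes through the conv stack quoted above.
# Assumed conventions: valid 3x3 convs (each side shrinks by 2),
# pooling stride = pool size, floor division.
def conv3x3(h, w):
    return h - 2, w - 2

def pool(h, w, k):
    return h // k, w // k

h, w = 35, 55            # 35 x 55 grayscale input
h, w = conv3x3(h, w)     # (1) Conv 32@3x3
h, w = conv3x3(h, w)     # (2) Conv 32@3x3
h, w = conv3x3(h, w)     # (3) Conv 64@3x3
h, w = pool(h, w, 3)     # (4) Max-Pooling 3x3
h, w = conv3x3(h, w)     # (5) Conv 80@3x3
h, w = conv3x3(h, w)     # (6) Conv 192@3x3
h, w = pool(h, w, 2)     # (7) Max-Pooling 2x2
print(h, w, 192 * h * w) # final spatial size and flattened length
```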
“…As shown in Figure 7, the proposed neighbor selection method consists of a double k-NN query in different feature spaces. This work provides a simplified version of our previous work [37]. Here, raw features are used as the appearance descriptor.…”
Section: Proposed System
confidence: 99%
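The double k-NN query described above can be sketched with the standard library alone. The two feature spaces (head pose, then raw appearance), the sample data, and the stage sizes below are illustrative assumptions, not the paper's actual configuration.

```python
import math

def knn(query, items, key, k):
    """Return the k items nearest to `query` under Euclidean
    distance on the feature vector extracted by `key`."""
    return sorted(items, key=lambda it: math.dist(key(it), query))[:k]

# Hypothetical samples: (head_pose_2d, raw_appearance_vec, gaze_label).
samples = [
    ((0.0, 0.0),   (0.1, 0.2, 0.3),    "g0"),
    ((0.1, 0.0),   (0.9, 0.8, 0.7),    "g1"),
    ((1.0, 1.0),   (0.1, 0.25, 0.3),   "g2"),
    ((0.05, 0.02), (0.12, 0.21, 0.29), "g3"),
]

# Stage 1: k-NN query in head-pose space.
stage1 = knn((0.0, 0.0), samples, key=lambda s: s[0], k=3)
# Stage 2: k-NN query in appearance space, restricted to stage-1 hits.
stage2 = knn((0.1, 0.2, 0.3), stage1, key=lambda s: s[1], k=2)
print([s[2] for s in stage2])  # → ['g0', 'g3']
```

Restricting the second query to the first stage's neighbors is what makes the query "double": the cheap head-pose filter prunes the pool before the appearance comparison.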
“…In addition to common methods such as Support Vector Regression (SVR), Adaptive Linear Regression (ALR) and Random Forest (RF), two methods are reproduced for fair comparison with the state of the art. The first method is a simple cascaded method [9][37][35] which uses multiple k-NN (k-Nearest Neighbor) classifiers to select neighbors in a feature space combining head pose, pupil center and eye appearance. The other method is to train a simple convolutional neural network (CNN) [12][34][36] to predict the eye gaze direction with an l2 loss.…”
Section: Appearance-based Gaze Estimation
confidence: 99%
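The l2 (Euclidean) loss named in the excerpts above is straightforward for 3-D gaze vectors; the angular-error helper is an assumption, since the excerpts name only the training loss, but angular error in degrees is the usual evaluation metric for gaze vectors.

```python
import math

def euclidean_loss(pred, target):
    """l2 (Euclidean) loss between predicted and ground-truth
    3-D gaze vectors, as used to train the CNN described above."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target)))

def angular_error_deg(pred, target):
    """Angle in degrees between two gaze vectors (an assumed
    evaluation metric, not quoted in the excerpts)."""
    dot = sum(p * t for p, t in zip(pred, target))
    n1 = math.sqrt(sum(p * p for p in pred))
    n2 = math.sqrt(sum(t * t for t in target))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

print(euclidean_loss([0, 0, -1], [0, 0, -1]))     # 0.0
print(angular_error_deg([0, 0, -1], [0, 1, -1]))  # about 45 degrees
```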