2019
DOI: 10.1016/j.jneumeth.2019.05.016

DeepVOG: Open-source pupil segmentation and gaze estimation in neuroscience using deep learning

Cited by 131 publications (76 citation statements)
References 21 publications
“…Use pupil size and point-of-gaze for predicting the users' behaviours (e.g., word searching, question answering, looking for the most interesting title in a list).
[28] Naïve Bayes classifier: use of fixation duration, mean, and standard deviation to identify various visual activities (e.g., reading, scene search).
[29] MLP: use of pupil dilation and gaze dispersion to classify various decision-making tasks.
[30] Decision tree, MLP, support vector machines (SVM), linear regression: use of fixation rate, fixation duration, fixations per trial, saccade amplitude, and relative saccade angles to identify eye movements and predict visualisation tasks.
In addition to these conventional methods, existing works also utilise deep learning (DL) approaches for pupil detection, using Convolutional Neural Networks (CNNs) to exploit hierarchical image patterns and eliminate artefacts. For instance, [21] proposed a fully convolutional CNN for segmentation of the entire pupil area, training the network on 3946 hand-annotated video-oculography images generated within a laboratory environment.…”
Section: Reference (mentioning)
confidence: 99%
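As an illustration of the fully convolutional segmentation approach described in the statement above, the following is a minimal sketch in Python/Keras of an encoder-decoder network that predicts a per-pixel pupil probability mask. The architecture, input size, and hyperparameters are assumptions chosen for illustration, not the DeepVOG authors' exact design or training setup.

# Minimal sketch of a fully convolutional pupil-segmentation network.
# Assumption: layer sizes and input shape are illustrative only, not DeepVOG's.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_pupil_segmenter(input_shape=(240, 320, 1)):
    inp = layers.Input(shape=input_shape)

    # Encoder: hierarchical image features at decreasing resolution.
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)
    c3 = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)

    # Decoder: upsample back to input resolution, reusing encoder features.
    u2 = layers.concatenate([layers.UpSampling2D()(c3), c2])
    c4 = layers.Conv2D(32, 3, padding="same", activation="relu")(u2)
    u1 = layers.concatenate([layers.UpSampling2D()(c4), c1])
    c5 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)

    # One sigmoid unit per pixel: probability that the pixel belongs to the pupil.
    out = layers.Conv2D(1, 1, activation="sigmoid")(c5)

    model = Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

model = build_pupil_segmenter()
model.summary()

Such a network would be trained on pairs of eye images and hand-annotated binary pupil masks; an ellipse can then be fitted to the predicted mask to recover the pupil centre and shape for gaze estimation.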
“…Despite the variety of existing methods for pupil localisation, further improvements are required for precise estimation of the pupil location. For instance, the DL-based pupil localisation and gaze estimation in [21] uses pixel distance to validate performance, which is not a standard representation of error when image resolutions vary. Furthermore, the validation is performed on a dataset containing artificially rendered images, which in most cases do not reflect real-time dynamics.…”
Section: Reference (mentioning)
confidence: 99%
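To make the criticism of pixel-distance metrics concrete, here is a small hypothetical Python calculation showing why the same pixel error corresponds to different visual angles at different image resolutions, which is why angular error is the more comparable unit. The sensor width, focal length, and resolutions below are invented for illustration, not values from any cited paper.

# Hypothetical example: camera parameters are assumptions chosen for illustration.
import math

def pixel_error_to_degrees(err_px, image_width_px, sensor_width_mm, focal_length_mm):
    # Size of one pixel on the sensor, then the error expressed on the sensor plane.
    mm_per_px = sensor_width_mm / image_width_px
    err_mm = err_px * mm_per_px
    # Convert the offset on the sensor plane into a visual angle.
    return math.degrees(math.atan2(err_mm, focal_length_mm))

# The same 5 px error maps to different angular errors once the resolution changes.
print(pixel_error_to_degrees(5, image_width_px=320, sensor_width_mm=4.8, focal_length_mm=6.0))
print(pixel_error_to_degrees(5, image_width_px=640, sensor_width_mm=4.8, focal_length_mm=6.0))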
“…Additionally, an integrated video documentation and analysis system will analyse eye movements and gait abnormalities by means of pattern recognition (e.g. neural networks, deep learning algorithms) [20][21][22]. It will be used to objectify the diagnosis and to classify patients' self-recorded symptoms.…”
Section: Intervention (mentioning)
confidence: 99%
“…Other eye trackers, which are based on high-speed video cameras, are more expensive than infrared-based eye trackers but are also more accurate than webcam-based eye trackers (Agarwal et al., 2019a). In such eye trackers, the measurement is performed using deep learning and computer vision applications (Kato et al., 2019; Yiu et al., 2019).…”
Section: Introduction (mentioning)
confidence: 99%