2021
DOI: 10.1109/tbme.2020.2990734
Deep Convolutional Neural Networks and Transfer Learning for Measuring Cognitive Impairment Using Eye-Tracking in a Distributed Tablet-Based Environment

Cited by 36 publications (43 citation statements)
References 15 publications
“…The details of the Visuospatial Memory Eyetracking Test were described in [24,25]: a passive viewing test that asks participants simply to enjoy the images displayed on the screen; the participants were not asked to perform any memorizing task and received no scores or feedback of any kind during the test. In short, the task first shows 20 images of scenes consisting of two to five objects, each for a duration of five seconds, then displays a modified set of images with either one object added to or removed from each image.…”
Section: Data Collection
confidence: 99%
“…The memory test was administered to the participants using the same protocol described in [25]. Briefly, the test was presented on an iPad Air 9.7" tablet with maximum screen brightness and mounted on a stand in portrait orientation during the test.…”
Section: PLOS ONE
confidence: 99%
“…The features of the source and target areas are mapped into the meta-feature space. As shown in [19,20], the procedure begins with annotated source data from which the action formats are recovered. The activity layouts capture the timestamp, location, and context.…”
Section: Related Work
confidence: 99%
“…Signal classification is the basis for identifying the diverse movement states of the eyeballs. Various algorithms have emerged in the past decade, including the Hidden Markov Model (HMM) [30,31,32,33], transfer learning [34,35,36,37], and linear classifiers [38,39,40,41]. A hierarchical HMM statistical algorithm [33] was used to classify ternary eye movements, and its classification of fixations, saccades, and smooth pursuits was evaluated as “good”.…”
Section: Introduction
confidence: 99%
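To make the HMM approach in the statement above concrete, the sketch below decodes a two-state HMM (fixation vs. saccade) over gaze speeds with the Viterbi algorithm. This is a minimal illustration, not the hierarchical algorithm of [33]; all state names, transition probabilities, and Gaussian emission parameters are made-up assumptions for the example.

```python
import math

# Hypothetical two-state HMM over gaze speed (deg/s); every parameter
# here is an illustrative assumption, not taken from the cited papers.
STATES = ("fixation", "saccade")
LOG_INIT = {"fixation": math.log(0.9), "saccade": math.log(0.1)}
LOG_TRANS = {
    "fixation": {"fixation": math.log(0.95), "saccade": math.log(0.05)},
    "saccade": {"fixation": math.log(0.20), "saccade": math.log(0.80)},
}
# Gaussian emission parameters (mean, std) for each state.
EMIT = {"fixation": (10.0, 8.0), "saccade": (150.0, 60.0)}

def log_gauss(x, mu, sigma):
    """Log density of a univariate Gaussian."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def viterbi(speeds):
    """Return the most likely state sequence for a list of gaze speeds."""
    # Initialise with the first observation.
    score = {s: LOG_INIT[s] + log_gauss(speeds[0], *EMIT[s]) for s in STATES}
    back = []
    for x in speeds[1:]:
        prev = score
        score, ptr = {}, {}
        for s in STATES:
            # Best predecessor state for s, including the transition cost.
            best = max(prev, key=lambda p: prev[p] + LOG_TRANS[p][s])
            ptr[s] = best
            score[s] = prev[best] + LOG_TRANS[best][s] + log_gauss(x, *EMIT[s])
        back.append(ptr)
    # Backtrack from the best final state.
    state = max(score, key=score.get)
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]

# Low speeds, a burst of high speeds, then low again.
print(viterbi([5, 8, 12, 200, 180, 160, 9, 6]))
```

With these parameters the decoder labels the low-speed samples as fixations and the high-speed burst as a saccade; a real system would learn the parameters from data (e.g. with Baum-Welch) and typically add a third state for smooth pursuit.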