2020
DOI: 10.1109/taffc.2017.2768026
A Novel Technique to Develop Cognitive Models for Ambiguous Image Identification Using Eye Tracker

Cited by 14 publications (10 citation statements)
References 48 publications
“…Vidyapu et al. (2019) proposed attention prediction on webpage images using multilabel classification. In a study by Roy et al. (2017), the authors developed a method to model the cognitive process of identifying ambiguous images.…”
Section: Fixation (mentioning)
Confidence: 99%
“…Thus, gaze input can support (Fares et al., 2013) or substitute (Templier et al., 2016) manual modes of interaction. According to Roy et al. (2017), eye fixations can be used to develop a model that predicts the objects a user is observing, which hints at how that user is interpreting a scene. Saccadic eye movements can enable prediction of cognitive load, mental fatigue, attention, emotion, and anxiety, according to work by Duchowski et al. (2019).…”
Section: Eye Movements (mentioning)
Confidence: 99%