What Am I Looking at? Interpreting Dynamic and Static Gaze Displays

2017 | DOI: 10.1111/cogs.12484

Abstract: Displays of eye movements may convey information about cognitive processes but require interpretation. We investigated whether participants were able to interpret displays of their own or others' eye movements. In Experiments 1 and 2, participants observed an image under three different viewing instructions. Then they were shown static or dynamic gaze displays and had to judge whether it was their own or someone else's eye movements and what instruction was reflected. Participants were capable of recognizing t…

Cited by 18 publications (26 citation statements)
References 56 publications (98 reference statements)
“…Participants were less able to tell their own fixations from someone else's fixations on the same stimulus, but participants' discrimination accuracy was still modestly above chance level (approximately 55% correct against 50% chance). Similar results were found by Van Wermeskerken, Litchfield, and Van Gog ( 2017 ). They found that participants could only discriminate their own eye movements from someone else's eye movements when a dynamic gaze visualization (i.e., a movie of somebody's gaze locations on the stimulus) was used and not when a static visualization was used.…”
Section: Introduction (supporting; confidence: 90%)
“…Thus, this suggests that our instruction did not produce better recognition memory than the earlier experiment. Van Wermeskerken et al ( 2017 ) found similar results, showing no differences between participants who were informed that they would be asked to recognize their fixation locations and participants who were not informed about it.…”
Section: Discussion (mentioning; confidence: 75%)
“…Doing so is thought to establish a common reference between the viewer and the person behind the gaze marker, which is thought to be helpful for learning (e.g., Jarodzka et al., 2012; Jarodzka, van Gog, Dorr, Scheiter, & Gerjets, 2013). In other studies, the gaze marker is used to convey intentions or allow taking the perspective of another (Foulsham & Lock, 2015; Litchfield & Ball, 2011; Müller et al., 2013; van Wermeskerken, Litchfield, & van Gog, 2017; Velichkovsky, 1995) and thus requires substantial elaboration by the viewer to be used in the intended fashion. It is an open question whether interpretation of the visualized gaze positions as collaborative behavior underlies the collaboration benefits found in our study when making use of the shared gaze information, or if the visualized dwell locations are simply used as a spatial pointer that guides searchers as to where to search (cf.…”
Section: Implications of Findings (mentioning; confidence: 99%)
“…In the study by Van Wermeskerken, Litchfield, and van Gog (2018) participants observed the painting “The Unexpected Visitor” under three different instructions (i.e., estimate the ages of the people in the painting, remember the positions of the objects in the room, and estimate how long the unexpected visitor had been away from the family; cf. Yarbus, 1967).…”
Section: Introduction (mentioning; confidence: 99%)