2017
DOI: 10.3389/fpsyg.2017.00715

Dissociating Attention and Eye Movements in a Quantitative Analysis of Attention Allocation

Abstract: In a recent paper, we introduced a method and equation for inferring the allocation of attention on a continuous scale. The size of the stimuli, the estimated size of the fovea, and the pattern of results implied that the subjects' responses reflected shifts in covert attention rather than shifts in eye movements. This report describes an experiment that tests this implication. We measured eye movements. The monitor briefly displayed (e.g., 130 ms) two small stimuli (≈1.0° × 1.2°), situated one atop another…

Cited by 6 publications
(5 citation statements)
References 18 publications
“…A fixed probability determined which one was the target (as in probability matching studies), and, accordingly, the participant’s task was to report on the contents of this stimulus. Although the two stimuli projected an image that did not exceed the dimensions of the fovea (e.g., Wandell, 1995), which we confirmed in an eye-tracking study (Heyman et al., 2017), the stimulus duration times were tailored to each participant so that he or she could report correctly on one stimulus, but not both. That is, the participants “saw” both stimuli but were able to identify the contents of just one.…”
Section: A Procedures For Quantifying the Allocation Of Covert Attention (supporting)
confidence: 55%
“…The readings suggested parallels between attention allocation and choice (e.g., Corbetta, Miezin, Dobmeyer, Shulman, & Petersen, 1990; Sperling, 1960). To test this idea, we developed a mathematical model and experimental procedure for calculating the allocation of covert visual attention; eye movements played no role (Heyman et al., 2016, 2017). The results were systematic, but…”
Section: Context (mentioning)
confidence: 99%
“…The visual world is experienced by means of eye fixations (Borys & Plechawska-Wójcik, 2017; Dolgünsoz, 2015; Heyman et al., 2017; Schneider, 2018). Fixations are essentially when our eyes stop scanning a scene, enabling us to extract detailed information from visual surroundings (Tobii Pro, 2020).…”
Section: Discussion (mentioning)
confidence: 99%
“…In human vision, a distinction is sometimes made between overt and covert attention, where overt attention refers to the location that is behaviorally prominent at a given point in time (typically the point of fixation), while covert attention refers to the current focus of cognitive processing. For example, it is possible that visual recognition occurs at a given location even though that location is not overtly attended (i.e., fixated), as Heyman, Montemayor, and Grisanzio (2017) show. Furthermore, there is evidence that viewers can classify visual scenes (i.e., perceive the gist of a scene) in the absence of overt attention (Li, VanRullen, Koch, & Perona, 2002).…”
Section: Attention In Humans and Machines (mentioning)
confidence: 94%