2023
DOI: 10.1037/rev0000379

A dynamical scan-path model for task-dependence during scene viewing.

Abstract: In real-world scene perception, human observers generate sequences of fixations to move image patches into the high-acuity center of the visual field. Models of visual attention developed over the last 25 years aim to predict two-dimensional probabilities of gaze positions for a given image via saliency maps. Recently, progress has been made on models for the generation of scan paths under the constraints of saliency as well as attentional and oculomotor restrictions. Experimental research demonstrated that ta…
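As a rough formalization of the distinction drawn in the abstract (notation ours, not taken from the paper): saliency-map models predict a static two-dimensional gaze-probability map, whereas scan-path models additionally specify how each fixation depends on the preceding ones,

\[
p(z \mid I) = \frac{S_I(z)}{\int S_I(z')\,dz'},
\qquad
p(z_1,\ldots,z_T \mid I) = \prod_{t=1}^{T} p\bigl(z_t \mid z_{1:t-1}, I\bigr),
\]

where \(I\) is the image, \(S_I\) a saliency map over that image, and \(z_t\) the fixation position at step \(t\).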

Cited by 9 publications (14 citation statements); references 117 publications.
“…By implementing psychophysically uncovered mechanisms of attentional and oculomotor control, ScanDy allows the generation of sequences of eye movements for any visual scene. Recent years have shown a growing interest in the simulation of time-ordered fixation sequences for static scenes (Tatler, Brockmole, and Carpenter, 2017; Malem-Shinitski et al., 2020; Schwetlick, Rothkegel, et al., 2020; Schwetlick, Backhaus, and Engbert, 2022; Kucharsky et al., 2021; Kümmerer, Bethge, and Wallis, 2022), as well as the frame-wise prediction of where humans tend to look on average when observing a dynamic scene (Molin, Etienne-Cummings, and Niebur, 2015; Min and Corso, 2019; Droste, Jiao, and Noble, 2020; Wang, Liu, et al., 2021). We are currently not aware of another computational model that is able to simulate time-resolved gaze positions for the full duration of dynamic scenes, analogous to human eye tracking data.…”
Section: Discussion (mentioning)
Confidence: 99%
“…Scanpath prediction in static scenes was pioneered by the seminal work of Itti, Koch, and Niebur (1998), who implemented the previously postulated concept of a saliency map (Koch and Ullman, 1985) algorithmically and suggested a strategy to sequentially select locations in the saliency map based on a “winner-take-all” and a subsequent “inhibition of return” mechanism. A more detailed model of attentional dynamics and saccadic selection in static scene viewing was proposed with the SceneWalk model family (Engbert et al., 2015; Schütt et al., 2017; Schwetlick, Rothkegel, et al., 2020; Schwetlick, Backhaus, and Engbert, 2022). These models predict the likelihood of moving the gaze to a certain position based on foveated access to saliency information and a leaky memory process.…”
Section: Introduction (mentioning)
Confidence: 99%
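The winner-take-all / inhibition-of-return scheme mentioned in the statement above can be sketched in a few lines. The following is a generic illustration under assumed parameter names and values; it is not an implementation of Itti, Koch, and Niebur (1998) or of the SceneWalk family.

```python
# Illustrative sketch only: a generic winner-take-all / inhibition-of-return
# scan-path generator over a precomputed saliency map. Parameter names and
# values (ior_sigma, ior_strength) are assumptions, not from the cited models.
import numpy as np

def generate_scanpath(saliency, n_fixations=10, ior_sigma=15.0, ior_strength=1.0):
    """Sequentially pick fixations from a 2D saliency array, suppressing a
    Gaussian region around each visited location (inhibition of return)."""
    sal = saliency.astype(float).copy()
    h, w = sal.shape
    ys, xs = np.mgrid[0:h, 0:w]
    path = []
    for _ in range(n_fixations):
        # Winner-take-all: the next fixation is the current global maximum.
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        path.append((int(y), int(x)))
        # Inhibition of return: attenuate saliency around the chosen location.
        dist2 = (ys - y) ** 2 + (xs - x) ** 2
        sal *= 1.0 - ior_strength * np.exp(-dist2 / (2.0 * ior_sigma ** 2))
    return path

# Example usage on a random "saliency map":
rng = np.random.default_rng(0)
print(generate_scanpath(rng.random((64, 64)), n_fixations=5))
```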
“…Given these properties, we tested the computational faithfulness of SEAM using the Markov Chain Monte Carlo (MCMC) sampling algorithm DREAM_ZS (Laloy and Vrugt, 2012), based on profile log-likelihoods and model parameter recovery, similar to the approach taken in Rabe et al. (2021). The DREAM_ZS sampler (Laloy & Vrugt, 2012; ter Braak & Vrugt, 2008; Vrugt et al., 2009) has previously been used successfully with complex dynamical models of eye-movement control, including SWIFT for reading (Rabe et al., 2021) and SceneWalk for scene viewing (Schwetlick et al., 2022; Schwetlick et al., 2020).…”
Section: Simulation Study (mentioning)
Confidence: 99%
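To illustrate what likelihood-based parameter recovery means in this setting, here is a minimal sketch using a plain random-walk Metropolis sampler on a toy one-parameter model. It is deliberately much simpler than DREAM_ZS and the eye-movement models in the quoted work; all names, values, and the toy likelihood are assumptions for illustration only.

```python
# Illustrative sketch only: likelihood-based parameter recovery with a plain
# random-walk Metropolis sampler (NOT the DREAM_ZS algorithm used in the
# cited work). The "model" is a toy Gaussian whose mean we recover from
# simulated data.
import numpy as np

rng = np.random.default_rng(1)
true_mu = 0.7
data = rng.normal(true_mu, 1.0, size=200)   # simulate data from a known parameter

def log_lik(mu):
    # Gaussian log-likelihood with unit variance (additive constants dropped).
    return -0.5 * np.sum((data - mu) ** 2)

samples, mu = [], 0.0
for _ in range(5000):
    prop = mu + rng.normal(0.0, 0.1)         # random-walk proposal
    if np.log(rng.random()) < log_lik(prop) - log_lik(mu):
        mu = prop                            # accept the proposal
    samples.append(mu)

# The posterior mean should recover the generating parameter (about 0.7).
print("posterior mean estimate:", np.mean(samples[1000:]))
```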
“…Successful examples of applying data assimilation to visual tasks are, for example, SceneWalk (Schwetlick et al., 2022; Schwetlick et al., 2020) for scene viewing and SWIFT (Rabe et al., 2021; Seelig et al., 2020) for reading. There, each event of the sequence, x_i, is a fixation.…”
Section: Sequential Likelihood (mentioning)
Confidence: 99%
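Schematically (our notation, consistent with the quoted description), the sequential likelihood of a fixation sequence factorizes as

\[
L(\theta \mid x_{1:N}) = p(x_1 \mid \theta)\,\prod_{i=2}^{N} p\bigl(x_i \mid x_{1:i-1}, \theta\bigr),
\]

where each event \(x_i\) is a fixation and \(\theta\) collects the model parameters; these conditional densities are the quantities that samplers such as DREAM_ZS evaluate during data assimilation.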