2022
DOI: 10.1167/jov.22.1.10
Visual search in naturalistic scenes from foveal to peripheral vision: A comparison between dynamic and static displays

Cited by 4 publications (4 citation statements)
References: 93 publications
“…When the sampled information from the periphery and the fovea is discrepant (e.g., when the saccade target is displaced or exchanged with another object), the visual system can segregate pre- and postsaccadic information (e.g., Atsma et al., 2016; Demeyer et al., 2010; Laurin et al., 2021; Tas et al., 2012; Tas et al., 2021). Furthermore, in some cases, the visual system samples object information with only peripheral vision (Treisman, 1986), and visual search is surprisingly unaffected by blocking foveal vision (David et al., 2021; Nuthmann, 2014; Nuthmann & Canas-Bajo, 2022). According to the transsaccadic feature prediction mechanism (Herwig & Schneider, 2014), object recognition and visual search are supported by predictions based on previous associations between peripheral and foveal object information.…”
Section: Introduction
confidence: 99%
“…To check for the presence of smooth pursuit in each event classified as a fixation, we determined the Euclidean distances between the gaze coordinates of the last sample of the previous saccade and the first sample of the next saccade (cf. Hutson et al., 2017; Nuthmann & Canas-Bajo, 2022). We then compared the distances with those observed in Experiment 2, where slide shows of single frames from the videos were used.…”
Section: Methods
confidence: 98%
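The displacement check described in this passage is straightforward to make concrete. Below is a minimal Python sketch of the idea, not the authors' actual pipeline: the event structure, the field names, and the assumption that saccades and fixations alternate in the event stream are all hypothetical. For each fixation, it computes the Euclidean distance between the gaze position at the end of the preceding saccade and the gaze position at the start of the following saccade; a large displacement during a nominal fixation suggests the eye was tracking a moving target, i.e. smooth pursuit.

import numpy as np

def fixation_displacements(events):
    """Euclidean gaze displacement across each fixation event.

    events: time-ordered list of dicts (hypothetical format), each with
      'type'  : 'saccade' or 'fixation'
      'start' : (x, y) gaze coordinates of the event's first sample
      'end'   : (x, y) gaze coordinates of the event's last sample
    Returns one distance per fixation that is bracketed by two saccades.
    """
    distances = []
    for i in range(1, len(events) - 1):
        prev_ev, ev, next_ev = events[i - 1], events[i], events[i + 1]
        if (ev['type'] == 'fixation'
                and prev_ev['type'] == 'saccade'
                and next_ev['type'] == 'saccade'):
            # Last sample of the previous saccade vs. first sample of the
            # next saccade: if the eye truly fixated, the two positions
            # should nearly coincide; a large distance suggests pursuit.
            p0 = np.asarray(prev_ev['end'], dtype=float)
            p1 = np.asarray(next_ev['start'], dtype=float)
            distances.append(float(np.linalg.norm(p1 - p0)))
    return distances

As in the quoted passage, the resulting distribution of distances could then be compared with one obtained from static displays (the paper's Experiment 2), where genuine smooth pursuit cannot occur.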
“…However, in real crime scenarios the protagonists and the objects they hold are likely to move. Recent eye-tracking research has demonstrated that motion in dynamic scenes leads to changes in viewing patterns, the degree of which depends on the viewing task (see Nuthmann & Canas-Bajo, 2022, for a review). Most relevantly, Dorr et al. (2010) found differences in gaze behavior for natural versus stop-motion movies.…”
Section: Methods
confidence: 99%
“…Moreover, as one of the pioneering studies using VR in the context of gaze tracking demonstrated, the allocation of gaze towards objects can be linked to broader task-related concepts, such as approach and avoidance behavior, in realistic contexts (Rothkopf et al., 2007). VR setups usually allow for a larger field of view than standard laboratory displays, which, in addition to leaving the head and the body free to move, might be critical, as peripheral vision contributes substantially to search behavior in natural-scene viewing (Nuthmann & Canas-Bajo, 2022) and in real-world tasks (see Vater et al., 2022, for a review). At least for the free exploration of indoor spaces, gaze allocation in VR is remarkably similar to real-world behavior (Drewes et al., 2021), provided the VR control does not interfere with gaze (Feder et al., 2022).…”
Section: Introduction
confidence: 99%