2019
DOI: 10.31234/osf.io/6uep5
Preprint

Where the action could be: Speakers look at graspable objects and meaningful scene regions when describing potential actions

Abstract: The world is visually complex, yet we can efficiently describe it by extracting the information that is most relevant to convey. How do the properties of real-world scenes help us decide where to look and what to say? Image salience has been the dominant explanation for what drives visual attention and production as we describe displays, but new evidence shows scene meaning predicts attention better than image salience. Here we investigated the relevance of one aspect of meaning, graspability (the grasping int…

Cited by 3 publications (10 citation statements)
References 5 publications (5 reference statements)
“…The rapid extraction of meaning-related information is consistent with behavioral work showing that the spatial distribution of meaningful scene features exerts an influence on even the initial shift of overt attention in real-world scenes (Henderson and Hayes, 2017, 2018; Hayes and Henderson, 2019; Peacock et al., 2020; Rehrig et al., 2020). Specifically, the 86 ms onset latency of the meaning-related activity observed in the present study is sufficiently fast to potentially influence even the earliest shifts of overt attention (Thorpe et al., 1996; Fabre-Thorpe et al., 2001; Gordon, 2004).…”
Section: Discussion (supporting, confidence: 89%)
“…Several recent studies have shown that eye movement patterns are predicted better by meaning maps than by physical saliency (Henderson and Hayes, 2017; Henderson et al., 2019). This advantage has been observed across multiple tasks, including visual search (Hayes and Henderson, 2019), simple free viewing (Peacock et al., 2019a), and scene and action description (Henderson and Hayes, 2018; Rehrig et al., 2020). The predictive advantage of meaning maps is present even when the task is to count the number of physically salient scene regions (Peacock et al., 2019b).…”
Section: Introduction (mentioning, confidence: 99%)
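The comparison these statements describe can be made concrete with a small sketch. Below is a minimal, hypothetical example (not code from any of the cited papers) of how a meaning map and a saliency map might each be scored against a fixation density map using squared linear correlation, a summary statistic commonly reported in this line of work; the arrays and mixing weights are invented for illustration.

```python
# Minimal sketch: score two feature maps against a fixation density map.
# All arrays are hypothetical 2D grids of the same shape; the "fixations"
# array is constructed to correlate with the meaning map by design.
import numpy as np

def map_fixation_r2(feature_map: np.ndarray, fixation_density: np.ndarray) -> float:
    """Squared Pearson correlation between a feature map and fixation density."""
    r = np.corrcoef(feature_map.ravel(), fixation_density.ravel())[0, 1]
    return r ** 2

rng = np.random.default_rng(0)
meaning_map = rng.random((32, 32))       # hypothetical meaning map
saliency_map = rng.random((32, 32))      # hypothetical saliency map
fixations = 0.8 * meaning_map + 0.2 * rng.random((32, 32))  # toy fixation density

print(f"meaning  R^2: {map_fixation_r2(meaning_map, fixations):.2f}")
print(f"saliency R^2: {map_fixation_r2(saliency_map, fixations):.2f}")
```

On this toy data the meaning map accounts for far more variance in fixation density than the saliency map, which is the pattern of results the cited studies report for real scenes.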
“…Overall, the results are consistent with previous meaning mapping work using a traditional central fixation start location (Henderson and Hayes, 2017, 2018; Peacock et al., 2019a,b; Rehrig et al., 2020), in which we found that early eye movements were more related to meaning than saliency. The present findings verify that the advantage of meaning over salience observed by previous meaning mapping studies was not simply due to an advantage for meaning at scene centers induced by the use of an initial central fixation location.…”
Section: Early Fixation Analyses (supporting, confidence: 92%)
“…Recent work in real-world attentional guidance has shown that meaning maps representing the semantic features of local scene regions are more highly related to fixation distributions than are saliency maps representing image feature differences, a result that has been replicated across a number of viewing tasks (Henderson and Hayes, 2017, 2018; Hayes and Henderson, 2019b; Peacock et al., 2019a,b; Rehrig et al., 2020). However, the centers of photographs may contain more meaningful information and image features than scene peripheries, and for that reason participants might strategically fixate centrally (Parkhurst et al., 2002; Tatler, 2007; Tseng et al., 2009; Bindemann, 2010; Rothkegel et al., 2017; van Renswoude et al., 2019), making it unclear whether meaning actually guides attention better than image salience or whether the apparent advantage is due to central fixation bias.…”
Section: Discussion (mentioning, confidence: 95%)