2020
DOI: 10.1037/xlm0000837

Where the action could be: Speakers look at graspable objects and meaningful scene regions when describing potential actions.

Abstract: The world is visually complex, yet we can efficiently describe it by extracting the information that is most relevant to convey. How do the properties of real-world scenes help us decide where to look and what to say? Image salience has been the dominant explanation for what drives visual attention and production as we describe displays, but new evidence shows scene meaning predicts attention better than image salience. Here we investigated the relevance of one aspect of meaning, graspability (the grasping int…

Cited by 20 publications (46 citation statements)
References 59 publications
“…Spatial constraint interacts with image salience to guide attention during visual search (Ehinger et al., 2009; Torralba et al., 2006). Given the correlation between image salience and meaning in real-world scenes (Elazary & Itti, 2008; Henderson, 2003; Henderson et al., 2007; Henderson & Hayes, 2017; Rehrig et al., 2020; Tatler et al., 2011) and the finding that meaning accounts for most if not all of the shared variance in predicting eye fixations when the intercorrelation between meaning and saliency is controlled (Hayes & Henderson, 2019; Henderson & Hayes, 2017; Peacock et al., n.d., 2019a, 2019b, 2020; Rehrig et al., 2020), spatial constraint might also interact with meaning to guide eye movements.…”
Section: Combining Meaning and Surfaces
confidence: 99%
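The shared-variance analysis this statement cites amounts to comparing squared semipartial correlations: fit fixation density on both maps together, then drop each predictor to see what it explains uniquely. A minimal Python sketch of that bookkeeping on synthetic data (the maps, effect sizes, and variable names below are illustrative assumptions, not values from the cited studies):

```python
# Hypothetical sketch of variance partitioning between correlated
# meaning and salience maps when predicting fixation density.
# All data here are synthetic and for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Toy per-pixel predictors: correlated meaning and salience maps,
# plus a fixation-density map driven mostly by meaning.
meaning = rng.normal(size=10_000)
salience = 0.8 * meaning + 0.6 * rng.normal(size=10_000)
fixations = 0.9 * meaning + 0.3 * rng.normal(size=10_000)

def r2(y, predictors):
    """R^2 of an OLS fit of y on the given predictors (plus intercept)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_both = r2(fixations, [meaning, salience])
unique_meaning = r2_both - r2(fixations, [salience])   # semipartial for meaning
unique_salience = r2_both - r2(fixations, [meaning])   # semipartial for salience
shared = r2_both - unique_meaning - unique_salience

print(f"unique to meaning:  {unique_meaning:.3f}")
print(f"unique to salience: {unique_salience:.3f}")
print(f"shared:             {shared:.3f}")
```

Whether meaning's unique share dominates, as the cited studies report, is an empirical question about the real maps; the sketch only shows the decomposition itself.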
“…The idea of an affordance foregrounds the formative role of sensory perception as the human-environment functional bind. Perception is always perception-for-action (Rehrig et al., 2020; Varela et al., 1991). Thus, the interplay of perception and action is co-constructive and mutually serving, with perception guiding action, even as action promotes perceptual vantage (Fiebelkorn and Kastner, 2019; Maturana and Varela, 1992; Schroeder et al., 2010).…”
Section: Theoretical Perspective: Ecological Dynamics
confidence: 99%
“…For example, we have generated what we call "contextualized" Meaning Maps by presenting exactly the same patches used in context-free Meaning Maps, but with each individual patch shown with its scene. We have also generated "Grasp Maps" using exactly the same patches with instructions focused on whether the region depicts an entity that can be grasped (Rehrig et al., 2020). Importantly, when the instructions are changed, subjects change their ratings to reflect the semantic features they are asked to rate, leading to different maps, even though the physical features are held constant.…”
confidence: 99%
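The patch-rating procedure this statement describes can be sketched as averaging each pixel's overlapping patch ratings into a map. A hypothetical Python reconstruction (the grid spacing, patch radius, and rating scale are placeholder choices; the published method also uses multiple patch sizes and smoothing):

```python
# Hypothetical sketch of turning per-patch ratings into a rating map
# (a meaning map or grasp map). Parameters are illustrative, not the
# published ones.
import numpy as np

def rating_map(ratings, centers, radius, shape):
    """Average overlapping circular-patch ratings at each pixel.

    ratings: 1D array, one mean rating per patch
    centers: (n, 2) array of (row, col) patch centers
    radius:  patch radius in pixels
    shape:   (height, width) of the output map
    """
    rows, cols = np.indices(shape)
    acc = np.zeros(shape)
    count = np.zeros(shape)
    for r, (cy, cx) in zip(ratings, centers):
        inside = (rows - cy) ** 2 + (cols - cx) ** 2 <= radius ** 2
        acc[inside] += r
        count[inside] += 1
    return np.divide(acc, count, out=np.zeros(shape), where=count > 0)

# Toy example: a coarse grid of patches over a 120x160 "scene",
# with higher graspability ratings on the right half.
ys, xs = np.meshgrid(np.arange(10, 120, 20), np.arange(10, 160, 20),
                     indexing="ij")
centers = np.column_stack([ys.ravel(), xs.ravel()])
ratings = (centers[:, 1] > 80).astype(float) * 5 + 1  # 1-6 rating scale

grasp_map = rating_map(ratings, centers, radius=25, shape=(120, 160))
print(grasp_map.shape, grasp_map.min(), grasp_map.max())
```

The same machinery yields different maps from identical patches simply by swapping the rating instructions, which is the point the quoted passage makes about holding physical features constant.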
“…And critically, it is a simple matter to extend the "classic" context-free Meaning Map approach to include additional types of semantic features, as we have done (Peacock et al., 2019; Rehrig et al., 2020). Importantly, whereas Meaning Maps can easily be extended to investigate a variety of semantic features, it is far less clear whether deep learning models like DG2 can ever in principle capture object-scene semantic features, or indeed any type of semantic feature.…”
confidence: 99%