2017
DOI: 10.1101/207076
Preprint

Meaning Guides Attention in Real-World Scene Images: Evidence from Eye Movements and Meaning Maps

Abstract: We compared the influences of meaning and salience on attentional guidance in scenes. Meaning was captured by "meaning maps" representing the spatial distribution of semantic information in scenes. Meaning maps were coded in a format that could be directly compared to maps of image salience generated from image features. We investigated the degree to which meaning versus image salience predicted human viewers' spatial distribution of attention over scenes, with attention operationalized as duration-weighted fi…
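The core analysis described in the abstract is a map-to-map comparison: a meaning map and an image-salience map each serve as predictors of a fixation-derived attention map. Below is a minimal sketch of how such a comparison could look, assuming all three maps are precomputed 2-D arrays of equal shape; the random placeholder arrays, array sizes, and the use of squared Pearson and semipartial correlations are illustrative assumptions, not the paper's exact pipeline.

import numpy as np

def r_squared(predictor: np.ndarray, attention: np.ndarray) -> float:
    """Squared Pearson correlation between two flattened maps."""
    p, a = predictor.ravel(), attention.ravel()
    return float(np.corrcoef(p, a)[0, 1] ** 2)

def unique_r_squared(target, other, attention):
    """Squared semipartial correlation: variance in the attention map
    explained by `target` after the contribution shared with `other`
    has been regressed out of `target`."""
    t, o, a = target.ravel(), other.ravel(), attention.ravel()
    slope, intercept = np.polyfit(o, t, 1)  # residualize target on the rival predictor
    resid = t - (slope * o + intercept)
    return float(np.corrcoef(resid, a)[0, 1] ** 2)

rng = np.random.default_rng(0)
meaning = rng.random((600, 800))           # placeholder meaning map
saliency = rng.random((600, 800))          # placeholder image-salience map
fixation_density = rng.random((600, 800))  # placeholder duration-weighted attention map

print("meaning R^2:        ", r_squared(meaning, fixation_density))
print("saliency R^2:       ", r_squared(saliency, fixation_density))
print("unique meaning R^2: ", unique_r_squared(meaning, saliency, fixation_density))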


Cited by 11 publications (20 citation statements) | References 67 publications
“…The full patch stimulus set consisted of 31,500 unique fine patches (87-pixel diameter) and 11,340 unique coarse patches (205-pixel diameter), for a total of 42,840 scene patches. The optimal meaning-map grid density for each patch size was previously determined by simulating the recovery of known image properties as reported in Henderson & Hayes (2018).…”
Section: Meaning Maps
confidence: 99%
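The quoted passage describes the patch decomposition that underlies a meaning map: each scene is tiled with overlapping circular patches at a fine (87-pixel) and a coarse (205-pixel) scale, the patches are rated for meaning by human raters, and the ratings are smoothed into a map. The sketch below shows the tiling step only; the grid densities (20 x 15 and 12 x 9 per scene) and square cropping are illustrative assumptions, not the values established in Henderson & Hayes (2018).

import numpy as np

def patch_centers(width, height, n_cols, n_rows):
    """Evenly spaced patch centers on an n_cols x n_rows interior grid."""
    xs = np.linspace(0, width, n_cols + 2)[1:-1]
    ys = np.linspace(0, height, n_rows + 2)[1:-1]
    return [(x, y) for y in ys for x in xs]

def extract_patch(img, cx, cy, diameter):
    """Crop a square window around (cx, cy); a circular mask could be
    applied afterward to obtain truly circular patches."""
    r = diameter // 2
    y0, y1 = max(0, int(cy - r)), int(cy + r)
    x0, x1 = max(0, int(cx - r)), int(cx + r)
    return img[y0:y1, x0:x1]

scene = np.zeros((768, 1024, 3), dtype=np.uint8)  # placeholder scene image
fine = [extract_patch(scene, x, y, 87)
        for x, y in patch_centers(1024, 768, 20, 15)]   # 300 fine patches here
coarse = [extract_patch(scene, x, y, 205)
          for x, y in patch_centers(1024, 768, 12, 9)]  # 108 coarse patches here
print(len(fine), len(coarse))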
“…In addition, it is known that salience has little predictive power when task demands are important (Tatler, Hayhoe, Land, & Ballard). In a recent paper, Henderson and Hayes showed that meaning is a better predictor of where people look than salience. This is in line with earlier work showing the importance of meaningful factors such as text (Wang & Pomplun) and semantics (Nyström & Holmqvist).…”
Section: Introduction
confidence: 99%
“…To determine whether gaze reinstatement (i.e., the extent to which encoding gaze patterns were recapitulated at retrieval) was related to gaze patterns (i.e., the types of information viewed) at encoding, we derived two measures to capture the extent to which individual gaze patterns at encoding reflected ‘salient’ image regions. Given that ‘saliency’ can be defined by both bottom-up (e.g., bright) and top-down (e.g., meaningful) image features, with the latter generally outperforming the former in predictive models (e.g., Henderson & Hayes, 2018; O’Connell & Walther, 2015), we computed two saliency maps for each image using the Saliency Toolbox (visual saliency map, reflecting bottom-up stimulus features) and aggregated participant data (informational saliency map, reflecting bottom-up and top-down features). Gaze patterns for each participant for each image were compared to both the visual and informational saliency maps, yielding two saliency scores.…”
Section: Results
confidence: 99%
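This citing study scores each participant's gaze against two maps per image: a "visual" saliency map from a bottom-up model (the Saliency Toolbox) and an "informational" saliency map aggregated from other participants' fixations. One simple way such a score could be computed is as the mean normalized map value at the fixated pixels, sketched below; that scoring rule and the placeholder maps are assumptions for illustration, and the cited study's exact metric may differ.

import numpy as np

def saliency_score(fixations, saliency_map):
    """Mean map value at fixated (x, y) pixels, on a map normalized to [0, 1]."""
    m = saliency_map.astype(float)
    m = (m - m.min()) / (m.max() - m.min() + 1e-12)
    rows = np.clip([int(y) for x, y in fixations], 0, m.shape[0] - 1)
    cols = np.clip([int(x) for x, y in fixations], 0, m.shape[1] - 1)
    return float(m[rows, cols].mean())

visual_map = np.random.rand(600, 800)         # placeholder model-derived map
informational_map = np.random.rand(600, 800)  # placeholder aggregated-gaze map
fixations = [(120.5, 300.2), (410.0, 95.7), (633.3, 512.8)]  # (x, y) in pixels

print("visual score:       ", saliency_score(fixations, visual_map))
print("informational score:", saliency_score(fixations, informational_map))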
“…To further interrogate the nature of information represented in the scanpath, we additionally correlated gaze reinstatement with measures of visual (i.e., stimulus-driven; bottom-up) and informational (i.e., participant-driven; bottom-up and top-down) saliency. Given that prior work has revealed a significant role for top-down features (e.g., meaning, Henderson & Hayes, 2018; scene content, O’Connell & Walther, 2015) in guiding eye movements, above and beyond bottom-up image features (e.g., luminance, contrast, Itti & Koch, 2000), we hypothesized that gaze reinstatement would be related particularly to the viewing of informationally salient image regions. Finally, to uncover the neural correlates of functional gaze reinstatement, we analyzed neural activity patterns at encoding, both across the whole brain and in memory-related regions of interest (i.e., HPC, PPA, see Liu et al., 2020), to identify brain regions that (1) predicted subsequent gaze reinstatement at retrieval, and (2) showed overlapping subsequent gaze reinstatement and subsequent memory effects.…”
Section: Introduction
confidence: 99%
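"Gaze reinstatement" here means the extent to which a participant's encoding gaze pattern is recapitulated at retrieval. One common density-map operationalization is to smooth each trial's fixations into a map and correlate the encoding map with the retrieval map for the same image, sketched below; the Gaussian smoothing width, map size, and correlation metric are illustrative assumptions, and the cited study's exact measure may differ.

import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(fixations, shape, sigma=30):
    """Turn (x, y) fixations into a Gaussian-smoothed density map."""
    m = np.zeros(shape)
    for x, y in fixations:
        r = int(np.clip(y, 0, shape[0] - 1))
        c = int(np.clip(x, 0, shape[1] - 1))
        m[r, c] += 1
    return gaussian_filter(m, sigma)

def gaze_reinstatement(enc_fix, ret_fix, shape=(600, 800)):
    """Pearson correlation between encoding and retrieval density maps."""
    enc = density_map(enc_fix, shape).ravel()
    ret = density_map(ret_fix, shape).ravel()
    return float(np.corrcoef(enc, ret)[0, 1])

encoding = [(100, 200), (400, 300), (650, 450)]   # placeholder fixations
retrieval = [(110, 190), (420, 310), (600, 480)]
print(gaze_reinstatement(encoding, retrieval))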