2018
DOI: 10.1007/s00426-018-1084-6
Less imageable words lead to more looks to blank locations during memory retrieval

Abstract: People revisit spatial locations of visually encoded information when they are asked to retrieve that information, even when the visual image is no longer present. Such "looking at nothing" during retrieval is likely modulated by memory load (i.e., mental effort to maintain and reconstruct information) and the strength of mental representations. We investigated whether words that are more difficult to remember also lead to more looks to relevant, blank locations. Participants were presented four nouns on a two…

Cited by 18 publications (28 citation statements) · References 80 publications
“…Consistent with previous findings of gaze reinstatement (Bone et al., 2018; Foulsham & Kingstone, 2013; Holm & Mäntylä, 2007; Johansson & Johansson, 2013; Kumcu & Thompson, 2018; Laeng et al., 2014; Scholz et al., 2016; for review, see Wynn, Shen, et al., 2019), retrieval-related EMs were more similar to the EMs enacted during encoding of the same (old) or similar (lure) image than to the EMs enacted during encoding of other images, suggesting that they reflect image-specific memory. In line with our predictions, gaze reinstatement was significantly greater than chance at all levels of test probe degradation, indicating that, given an incomplete cue, EMs facilitate reactivation of a specific item representation from memory.…”
Section: Discussion (supporting)
confidence: 87%
“…Given that image regions high in visual [62-64] or semantic [65] saliency are likely to be visited first during encoding, it is perhaps not surprising that these regions are also likely to be visited first during retrieval, as they facilitate the matching of present input with stored memory representations (see Fig. 2, bottom right). Indeed, preservation of temporal order in initial fixations has been widely reported in image recognition tasks [36, 45, 46, 64; see also 38]. Critically, however, reinstatement of spatial locations and temporal order are often confounded.…”
Section: Temporal Reinstatement (mentioning)
confidence: 99%
“…Other studies have similarly shown that the temporal sequence of encoding fixations is not recapitulated in full during retrieval. Evidence of reinstatement of previously sampled spatial regions is similarly varied, with some studies defining spatial similarity based on screen quadrants [e.g., 39, 44-46] and others using more strictly defined grid patterns [e.g., 29, 38] or experimenter-defined areas of interest [e.g., 27, 44]. Despite wide variance in definitions and measures of scanpath similarity, multiple studies have found evidence for some amount of eye movement-based reinstatement during repeated stimulus presentations and retrieval.…”
Section: Scanpath Theory (mentioning)
confidence: 99%
“…During retrieval, people fixate on empty locations that were associated with task-relevant stimuli during encoding (Altmann, 2004; Bone et al., 2019; Brandt & Stark, 1997; Johansson et al., 2012; Johansson et al., 2006; Kumcu & Thompson, 2018; Laeng et al., 2014; Laeng & Teodorescu, 2002; Richardson & Spivey, 2000; Scholz et al., 2018; Scholz et al., 2016; Spivey & Geng, 2001). For instance, Spivey and Geng (2001) found that when participants were questioned about an object, they gazed back to empty locations on the screen corresponding to those where visual information had been presented during encoding.…”
Section: Introduction (mentioning)
confidence: 99%