2016
DOI: 10.3758/s13414-016-1111-x
Of “what” and “where” in a natural search task: Active object handling supports object location memory beyond the object’s identity

Abstract: Looking for as well as actively manipulating objects that are relevant to ongoing behavioral goals are intricate parts of natural behavior. It is, however, not clear to what degree these two forms of interaction with our visual environment differ with regard to their memory representations. In a real-world paradigm, we investigated if physically engaging with objects as part of a search task influences identity and position memory differently for task-relevant versus irrelevant objects. Participants equipped w…

Cited by 32 publications (27 citation statements)
References 39 publications (66 reference statements)
“…Further, with every subsequent search within the same environment, reaction times decreased, showing that participants profited from repeatedly searching the same environment during this active task44,45, compared to searches in 2D46–48. However, the influence of scene grammar was mediated by the different building blocks of a scene: small/movable (local) and large/generally stationary (global) objects, which we have come to call “anchors”.…”
Section: Discussion
confidence: 90%
“…Interestingly, we found that fixation map similarity during search failed to predict whether a scene would be correctly recognized later (Figure 5d). Based on previous research suggesting that viewing tasks may affect the extraction (Võ & Wolfe, 2012) and/or the retention (Maxcey-Richard & Hollingworth, 2013) of visual information, and that top-down goals may have prioritized task-relevant information over task-irrelevant information (Draschkow & Võ, 2016), we speculate that the visual search task prioritized search-related operations during search, which could increase memory for those specific objects but reduce the extraction and retention of overall scene visual information that is irrelevant to search yet critical for incidental encoding of the scene in its totality. When the visual search task is completed (i.e., the search object is found), however, the priority of encoding-related operations is normalized and incidental scene encoding resumes during free-viewing (i.e., the viewing time remaining after the search object has been found).…”
Section: Discussion
confidence: 99%
“…These results suggest that viewing tasks may affect the extraction (Võ & Wolfe, 2012) and/or the retention (Maxcey-Richard & Hollingworth, 2013) of visual information within fixations. In doing so, top-down goals may have prioritized task-relevant information over task-irrelevant information (Draschkow & Võ, 2016). In addition, viewing tasks (e.g., visual search) may have contributed to better integration of bottom-up visual information and contextual scene semantics, leading to stronger memory representations than intentional encoding (Draschkow et al., 2014; Josephs et al., 2016; Võ & Wolfe, 2015).…”
Section: Introduction
confidence: 99%
“…A large body of work in visual search has revealed many of the factors that influence search efficiency, such as the stimulus features of the image, top-down guidance, and scene semantics1. However, most of this work has been done using 2D displays on computer monitors, and only a relatively small number of studies have examined visual search in the natural world2–4. This is an important issue because the nature of the stimulus in standard experimental paradigms differs considerably from that in everyday experience3,5–7.…”
Section: Introduction
confidence: 99%