The relationship between object files and visual working memory (VWM) was investigated in a new paradigm combining features of traditional VWM experiments (color change detection) and object-file experiments (memory for the properties of moving objects). Object-file theory was found to account for a key component of object-position binding in VWM: With motion, color memory came to be associated with the new locations of objects. However, robust binding to the original locations was found despite clear evidence that the objects had moved. This latter binding appears to constitute a scene-based component in VWM, which codes object location relative to the abstract spatial configuration of the display and is largely insensitive to the dynamic properties of objects.
In a contextual cueing paradigm, we examined how memory for the spatial structure of a natural scene guides visual search. Participants searched through arrays of objects that were embedded within depictions of real-world scenes. If a repeated search array was associated with a single scene during study, then array repetition produced significant contextual cueing. However, expression of that learning depended on instantiating the original scene in which the learning occurred: Contextual cueing was disrupted when the repeated array was transferred to a different scene. Such scene-specific learning was not absolute, however. Under conditions of high scene variability, repeated search arrays were learned independently of the scene background. These data suggest that when a consistent environmental structure is available, spatial representations supporting visual search are organized hierarchically, with memory for functional sub-regions of an environment nested within a representation of the larger scene.
Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants performed an unanticipated 2nd block of trials. When the same scene occurred in the 2nd block, the change within the scene was (a) identical to the original change, (b) a new object appearing in the original change location, (c) the same object appearing in a new location, or (d) a new object appearing in a new location. Results suggest that attention is rapidly allocated to previously relevant locations and then to previously relevant objects. This pattern of locations dominating objects remained when object identity information was made more salient. Eye tracking verified that scene memory results in more direct scan paths to previously relevant locations and objects. This contextual guidance suggests that a high-capacity long-term memory for scenes is used to ensure that limited attentional capacity is allocated efficiently rather than being squandered.