The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response.

The input to the human visual system consists primarily of a series of static snapshots, most lasting only a few hundred milliseconds, separated by blinks and saccades. It is often useful to compare information that was obtained from a previous snapshot and stored in visual working memory (VWM) with the information that is available in the current snapshot. The purpose of the present study was to characterize the processes involved in this comparison.

The comparison of VWM representations with sensory inputs is likely to be important for both low-level and high-level aspects of vision (for a detailed discussion, see Luck, in press). At a
Three experiments examined the visual memory representation supporting performance at long interstimulus intervals (ISIs) in an empty cell localization task. Two arrays of dots within a 4 × 4 grid were displayed briefly in succession. One grid cell did not contain a dot in either array, and the task was to localize the empty cell. In Experiment 1, we replicated previous findings of recovery to high levels of performance at long ISIs. In Experiment 2, we tested whether figural grouping in visual short-term memory (VSTM) supports long-ISI performance by manipulating the complexity of the array pattern. Pattern complexity had no effect on empty cell localization at 0-msec ISI, suggesting dependence on high-capacity visible persistence, but there was a large simple pattern advantage at long ISIs, suggesting dependence on figural grouping in VSTM. Experiment 3 demonstrated that participants typically remember the empty cells of the first array, and not the dots, for comparison with Array 2.
Previous studies have proposed that attention is not necessary for detecting simple features but is necessary for binding them to spatial locations. The present study tested this hypothesis, using the N2pc component of the event-related potential waveform as a measure of the allocation of attention. A simple feature detection condition, in which observers reported whether a target color was present or not, was compared with feature-location binding conditions, in which observers reported the location of the target color. A larger N2pc component was observed in the binding conditions than in the detection condition, indicating that more attentional resources are needed to bind a feature to a location than to detect the feature independently of its location. This finding supports theories of attention in which attention plays a special role in binding features.