Observers viewed displays containing a variable number of distractors of one color and a target of another color. In some experiments, the target and distractors maintained their colors from trial to trial; in others, the colors reversed unpredictably. Observers made a speeded two-choice judgment concerning the presence, the color, or the shape of the odd-colored target. With only one exception, all of these conditions produced the same pattern of results: reaction times remained constant as the number of distractors increased. The exception occurred when observers judged the shape of the odd-colored target and the colors of the target and distractors reversed unpredictably. In this case, reaction times decreased as the number of distractors increased. These results are interpreted in terms of the attentional requirements of the different judgments and the mechanisms that guide attention.

Figure 1 shows an odd target located among homogeneous distractors. Two observations have been made concerning such displays. First, the odd element "pops out"; that is, it immediately attracts attention. Second, the target is readily detected regardless of the number of distractors in the display (Donderi & Zelnicker, 1969; Egeth, Jonides, & Wall, 1972; Neisser, 1963; Treisman & Gelade, 1980). It has been assumed that these two observations are causally related: the target is easy to detect because it immediately summons attention (Duncan & Humphreys, 1989; Koch & Ullman, 1985; Yantis & Jonides, 1984). If we compare the predictions of models of attentional guidance with the results of detection experiments, however, this relationship is less clear. As detailed below, these models predict that under some circumstances the target should become easier to find as more distractors are added to the display. The results of detection experiments do not support this prediction.
When the target and distractors have very different features, as in Figure 1, detection times are unaffected by increasing numbers of distractors. When the target and distractors have similar features, detection times increase with the number of distractors (Duncan & Humphreys, 1989). Thus, the slope of detection time plotted against the number of distractors is usually either zero or positive, but rarely negative (for an exception, see Bacon & Egeth, 1991). Since the models predict a negative slope, these empirical results indicate that either the models are incorrect or they do not apply to target detection. Our hypothesis is that the models of attentional guidance are correct, but tha...

Author note: This work was supported by Grant 1F32 EY06155 from the NEI and Grant 83-0320 from the AFOSR. We are indebted to Jeremy Wolfe and Vera Maljkovic for their comments on an earlier version of this manuscript. K. Nakayama is now in the Department of Psychology at Harvard University. Correspondence should be addressed to M. J. Bravo, Smith-Kettlewell Eye Research Institute, 2232 Webster St., San Francisco, CA 94115; e-mail: mary@skivs.ski.org.
When searching for a target object, observers use an internal representation of the target's appearance as a search template. This study used naturalistic stimuli to examine the specificity of this template. Observers first learned several name-image pairs; they then participated in a search experiment in which the names served as cues and the images served as targets. To test whether the observers searched for the targets using an exact image template, we included targets that were transformations of the studied image and targets that belonged to the same subordinate-level category as the studied image. The same stimuli were also used in a search experiment involving image cues. The name cue and image cue experiments produced different patterns of results. Unlike image cues, name cues produced benefits for transformations of the studied images that were similar to the benefits for the studied images themselves. Also unlike image cues, name cues produced no benefit for members of the same subordinate-level category as the studied image. These results suggest that when observers are trained on an image, they develop a search template that is relatively specific for the image but still tolerant to changes in scale and orientation.
We propose a measure of clutter for real images that can be used to predict search times. This measure uses an efficient segmentation algorithm (P. Felzenszwalb & D. Huttenlocher, 2004) to count the number of regions in an image. This number is not uniquely defined, however, because it varies with the scale of segmentation. The relationship between the number of regions and the scale of segmentation follows a power law, and the exponent of the power law is similar across images. We fit power law functions to the multiple scale segmentations of 160 images. The power law exponent was set to the average value for the set of images, and the constant of proportionality was used as a measure of image clutter. The same 160 images were also used as stimuli in a visual search experiment. This scale-invariant measure of clutter accounted for about 40% of the variance in the visual search times.
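The clutter measure described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes the region counts at each segmentation scale have already been obtained (e.g., from the Felzenszwalb-Huttenlocher algorithm) and fits the power law n(s) = c * s^k in log-log space; the function name and interface are hypothetical. As in the text, the exponent k can be fixed to a set-wide average so that only the constant of proportionality c (the clutter score) is fit per image.

```python
import numpy as np

def clutter_from_region_counts(scales, region_counts, exponent=None):
    """Fit a power law n(s) = c * s**k to region counts across
    segmentation scales. Returns (c, k); c serves as the clutter score.

    If `exponent` is given (e.g., the average k over an image set),
    only the constant of proportionality c is fit.
    Note: hypothetical helper, not the published implementation.
    """
    log_s = np.log(np.asarray(scales, dtype=float))
    log_n = np.log(np.asarray(region_counts, dtype=float))
    if exponent is None:
        # Free fit in log-log space: slope = exponent k, intercept = log(c).
        k, log_c = np.polyfit(log_s, log_n, 1)
    else:
        # Fixed exponent: least-squares fit of log(c) alone.
        k = float(exponent)
        log_c = np.mean(log_n - k * log_s)
    return float(np.exp(log_c)), float(k)
```

For example, region counts generated exactly by n(s) = 100 * s^-0.7 recover c = 100 and k = -0.7; per-image clutter scores would then come from calling the function with a fixed set-average exponent.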
An airport security worker searching a suitcase for a weapon is engaged in an especially difficult search task: the target is not well specified, it is not salient, and it is not predicted by its context. Under these conditions, search may proceed item by item. In the experiment reported here, we tested whether the items for this form of search are whole familiar objects. Our displays were composed of color photographs of ordinary objects that were either uniform in color and texture (simple) or had two or more parts with different colors or textures (compound). The observer's task was to detect the presence of a target belonging to a broad category (food). We found that when the objects were presented in a sparse array, search times to find the target were similar for displays composed of simple and compound objects. But when the same objects were presented as dense clutter, search functions were steeper for displays composed of compound objects. We attribute this difference to the difficulty of segmenting compound objects in clutter: compared with simple objects, compound objects are less likely to be organized into a single object by bottom-up grouping processes. Our results indicate that while search rates in a sparse display may be determined by the number of objects, search rates in clutter are also affected by the number of object parts.
Recent evidence suggests that preattentive processing may not be limited to the analysis of simple stimulus features, as previously supposed. To explore this idea, a visual search task was used to test whether the shapes of several perceptual groups can be processed in parallel. Textured displays that give rise to strong perceptual grouping were used to create figures on a background. Search times for a target figure distinguished by a unique shape were found to be independent of the number of distractor figures in the display. This result indicates that perceptual groups may be processed in parallel and suggests an expanded role for preattentive processing in vision.