Traditional models of visual search assume that interitem similarity effects arise within each feature dimension independently of other dimensions. In the present study, we examine whether distractor-distractor effects also depend on feature conjunctions (i.e., whether feature conjunctions form a separate "feature" dimension that influences interitem similarity). Distractors were generated from the spatial frequency and orientation feature dimensions. In the bound condition, the number of distractors sharing the same conjunction of features was higher than in the unbound condition, but the sharing of features within the frequency and orientation dimensions was the same across conditions. The target was found more efficiently in the bound condition than in the unbound condition, indicating that distractor-distractor similarity is also influenced by conjunctive representations.

Since the early 1980s, the nature of attention in visual search has been investigated extensively. It is generally accepted that search efficiency is affected by two types of similarity: target-distractor similarity and distractor-distractor similarity. Wolfe, Cave, and Franzel (1989) demonstrated, in a triple conjunction search task, that a target could be found more efficiently when it shared only one feature with distractors than when it shared two. This is an effect of target-distractor similarity. The effect of distractor-distractor similarity, on the other hand, was reported by Duncan and Humphreys (1989). In their study, observers were required to look for a T among Ls, and search performance improved when the distractors were all displayed at the same orientation compared to when they were rotated randomly.
In summary, search efficiency increases with decreasing target-distractor similarity and with increasing distractor-distractor similarity.

The guided search model (GSM) is a prominent model of visual search that can explain the effects of both target-distractor and distractor-distractor similarity on search efficiency (Cave & Wolfe, 1990; Wolfe, 1994; Wolfe et al., 1989). GSM assumes that the visual input is decomposed into several feature dimensions, such as colour, orientation, or motion, and that target-distractor and distractor-distractor similarity are calculated within each feature dimension separately. An activation map, which guides