Visual search involves the matching of visual input to a "search template," an internal representation of task-relevant information. The present study investigated the contents of the search template during visual search for object categories in natural scenes, for which low-level features do not reliably distinguish targets from nontargets. Subjects were cued to detect people or cars in diverse photographs of real-world scenes. On a subset of trials, the cue was followed by task-irrelevant stimuli instead of scenes, directly followed by a dot that subjects were instructed to detect. We hypothesized that stimuli that matched the active search template would capture attention, resulting in faster detection of the dot when presented at the location of a template-matching stimulus. Results revealed that silhouettes of cars and people captured attention irrespective of their orientation (0°, 90°, or 180°). Interestingly, strong capture was observed for silhouettes of category-diagnostic object parts, such as the wheel of a car. Finally, attentional capture was also observed for silhouettes presented at locations that were irrelevant to the search task. Together, these results indicate that search for familiar object categories in real-world scenes is mediated by spatially global search templates that consist of view-invariant shape representations of category-diagnostic object parts.
An imbalance between top-down and bottom-up processing in perception (specifically, over-reliance on top-down processing) can lead to anomalous perception, such as illusions. One factor that may be involved in anomalous perception is visual mental imagery, the experience of "seeing" with the mind's eye. There are vast individual differences in self-reported imagery vividness, and more vivid imagery is linked to a more sensory-like experience. We therefore hypothesized that susceptibility to anomalous perception is linked to individual imagery vividness. To investigate this, we adopted a paradigm known to elicit the perception of faces in pure visual noise (pareidolia). In four experiments, we explored how imagery vividness contributes to this experience under different response instructions and environments. We found strong evidence that people with more vivid imagery were more likely to see faces in the noise, although removing suggestive instructions weakened this relationship. Analyses from the first two experiments led us to explore confidence as another factor in pareidolia proneness. We therefore modulated environmental noise and added a confidence rating in a novel design. We found strong evidence that pareidolia proneness is correlated with uncertainty about real percepts. Decreasing perceptual ambiguity abolished the relationship between pareidolia proneness and both imagery vividness and confidence. The results cannot be explained by incidental face-like patterns in the noise, individual variations in response bias, perceptual sensitivity, subjective perceptual thresholds, viewing distance, testing environments, motivation, gender, or prosopagnosia. This indicates a critical role for mental imagery vividness and perceptual uncertainty in anomalous perceptual experience.