In a series of experiments, we investigated the ubiquity of confirmation bias in cognition by measuring whether visual selection is prioritized for information that would confirm a proposition about a visual display. We show that attention is preferentially deployed to stimuli matching a target template, even when alternate strategies would reduce the number of searches necessary. We argue that this effect is an involuntary consequence of goal-directed processing, and show that it can be reduced when ample time is provided to prepare for search. These results support the notion that capacity-limited cognitive processes contribute to the biased selection of information that characterizes confirmation bias.
Despite decades of research, the conditions under which shifts of attention to prior target locations are facilitated or inhibited remain unknown. This ambiguity is a product of the popular feature discrimination task, in which attentional bias is commonly inferred from the efficiency with which a stimulus feature is discriminated after its location has been repeated or changed. Problematically, these tasks lead to integration effects: effects of target-location repetition appear to depend entirely on whether the target feature or response also repeats, allowing for several possible inferences about orienting bias. To dissociate integration effects from orienting biases, we designed the present experiments to require localized eye movements and manual discrimination responses to serially presented targets with randomly repeating locations. Eye movements revealed consistent biases away from prior target locations. Manual discrimination responses revealed integration effects. These data collectively revealed inhibited reorienting and integration effects, which resolve the ambiguity and reconcile episodic integration and attentional orienting accounts.
Models of visual working memory (VWM) have benefitted greatly from the use of the delayed-matching paradigm. However, in this task, the ability to recall a probed feature is confounded with the ability to maintain the proper binding between the feature that is to be reported and the feature (typically location) that is used to cue a particular item for report. Given that location is typically used as a cue feature, we used the delayed-estimation paradigm to compare memory for location to memory for color, rotating which feature was used as a cue and which was reported. Our results revealed several novel findings: 1) the likelihood of reporting a probed object's feature was higher when reporting location with a color cue than when reporting color with a location cue; 2) location report errors were composed entirely of swap errors, with little to no random location reports; and 3) both color and location reports benefitted greatly from the presence of nonprobed items at test. This last finding suggests that it is uncertainty over the bindings between locations and colors at memory retrieval, not at encoding, that drives swap errors. We interpret our findings as consistent with a representational architecture that nests remembered object features within remembered locations.
Feature Integration Theory proposed that attention shifts between target-like representations in our visual field. However, the nature of the representations that determine what is target-like received less specification than the nature of the attention shifts themselves. In recent years, visual search research has focused on the nature of the memory representations that we use to guide our shifts of attention. Sensitive measures of memory quality indicate that these template representations are remembered better than other, merely maintained, memories. Here we tested the hypothesis that we prepare for difficult search tasks by storing a higher-fidelity target representation in working memory than we do when preparing for an easy search task. To test this hypothesis, we explicitly tested participants' memory of the target color they searched for (i.e., the attentional template) versus another memory that was not used to guide attention (i.e., an accessory representation), following blocks of searches with easy-to-find targets (i.e., homogeneously colored distractors) and blocks of searches with hard-to-find targets (i.e., heterogeneously colored distractors). Although homogeneous-distractor searches required minimal precision for distractor rejection, we found that templates were still remembered better than accessories, just as we found in heterogeneous-distractor searches. We therefore suggest that stronger memories for templates likely reflect the need to decide whether new perceptual inputs match the template, rather than an attempt to create a better template representation in anticipation of difficult searches.
Visual working memory (VWM) plays a central role in visual cognition, and current work suggests that there is a special state in VWM for items that are the goal of visual searches. However, whether the quality of memory for target templates differs from memory for other items in VWM is currently unknown. In this study, we measured the precision and stability of memory for search templates and accessory items to determine whether search templates receive representational priority in VWM. Memory for search templates exhibited increased precision and probability of recall, whereas accessory items were remembered less often. Additionally, while memory for templates showed benefits when instances of the template appeared in search, this benefit was not consistently observed for accessory items when they appeared in search. Our results show that becoming a search template can substantially affect the quality of a representation in VWM.
When there is a relatively long interval between two successive stimuli that must be detected or localized, there are robust processing costs when the stimuli appear at the same location. However, when two successive visual stimuli that must be identified appear at the same location, there are robust same-location costs only when the two stimuli differ in their responses; otherwise, same-location benefits are observed. Two separate frameworks, inhibited attentional orienting and episodic integration, have been proposed to account for these patterns. Recent findings hint at a possible reconciliation between these frameworks: requiring a response to an event in between two successive visual stimuli may unmask same-stimulus and same-location costs that are otherwise obscured by episodic integration benefits in identification tasks. We tested this hybrid account by integrating an intervening response event with an identification task that would otherwise generate the boundary between same-location benefits and costs. Our results showed that the intervening event neither altered the boundary between location-repetition benefits and costs nor reliably or unambiguously reversed the common stimulus-response repetition benefit. The findings delimit the usefulness of an intervening event for disrupting episodic integration, suggesting that effects from intervening response events are tenuous. The divide between attention and feature-integration accounts is delineated in the context of methodological and empirical considerations.
Object-substitution masking (OSM) is a unique paradigm for examining object-updating processes. However, existing models of OSM are underspecified with respect to the impact of object updating on the quality of target representations. Using two OSM paradigms combined with a mixture model analysis, we examine the impact of postperceptual processes on a target's representational quality within conscious awareness. We conclude that the object-updating processes responsible for OSM degrade the precision of object representations. These findings contribute to a growing body of research advocating for the application of mixture model analysis to the study of how cognitive processes impact the quality (i.e., precision) of object representations.
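The mixture model analysis referenced in this line of work is typically a Zhang and Luck-style two-component model: response errors in a continuous-report task are modeled as a von Mises distribution centered on the target (whose concentration, kappa, indexes precision) mixed with a uniform guessing component (whose weight, g, indexes failures to report the target). A minimal maximum-likelihood sketch, using synthetic data with illustrative parameter values rather than any paper's actual data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def fit_mixture(errors):
    """Fit a two-component mixture to response errors (radians, in [-pi, pi]):
    a von Mises centered on the target (precision component) plus a uniform
    distribution (guessing component). Returns (guess_rate, kappa)."""
    def neg_log_likelihood(params):
        g, kappa = params
        # Mixture density: (1 - g) precise responses + g uniform guesses.
        density = (1 - g) * vonmises.pdf(errors, kappa) + g / (2 * np.pi)
        return -np.sum(np.log(density))

    result = minimize(neg_log_likelihood, x0=[0.2, 5.0],
                      bounds=[(1e-3, 1 - 1e-3), (0.05, 100.0)],
                      method="L-BFGS-B")
    return result.x  # (g, kappa)

# Synthetic data: 70% precise responses (kappa = 8), 30% random guesses.
rng = np.random.default_rng(0)
n = 2000
precise = rng.vonmises(0.0, 8.0, size=int(n * 0.7))
guesses = rng.uniform(-np.pi, np.pi, size=n - int(n * 0.7))
errors = np.concatenate([precise, guesses])

g_hat, kappa_hat = fit_mixture(errors)
```

With enough trials, the fitted guess rate and concentration recover the generating values, which is what licenses interpreting kappa as representational precision separately from the probability of reporting the target at all. Extensions of this model (e.g., adding a swap component centered on nontarget features) follow the same maximum-likelihood structure.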
Confirmation bias has recently been reported in visual search, where observers who were given a perceptual rule to test (e.g., "Is the p on a red circle?") preferentially searched stimuli that could confirm the rule (Rajsic, Wilson, & Pratt, Journal of Experimental Psychology: Human Perception and Performance, 41(5), 1353–1364, 2015). In this study, we compared the ability of concrete and abstract visual templates to guide attention using the visual confirmation bias. Experiment 1 showed that confirmatory search tendencies do not result from simple low-level priming, as they occurred even when color templates were verbally communicated. Experiment 2 showed that confirmation bias did not occur when the search prompt referred to the absence of a feature (i.e., reporting whether a target was on a nonred circle). Experiment 3 showed that confirmatory search also did not occur when search prompts referred to a set of visually heterogeneous features (i.e., reporting whether a target was on a colorful circle, regardless of the color). Together, these results show that the confirmation bias likely results from a matching heuristic, such that visual codes involved in representing the search goal prioritize stimuli possessing these features.