People's behaviors are often guided by valenced responses to objects in the environment. Beyond positive and negative evaluations, attitudes research has documented the importance of attitude strength: qualities of an attitude that enhance or attenuate its impact and durability. Although neuroscience research has extensively investigated valence, little work exists on related variables such as metacognitive judgments about one's attitudes. It remains unclear, then, whether the various indicators of attitude strength represent a single underlying neural process or whether they reflect independent processes. To examine this, we used functional MRI (fMRI) to identify the neural correlates of attitude strength. Specifically, we focus on ambivalence and certainty, which represent metacognitive judgments that people can make about their evaluations. Although these 2 attributes are often correlated, prior neuroscience research suggests that they may have distinct neural underpinnings. We investigate this by having participants make evaluative judgments of visually presented words while undergoing fMRI. After scanning, participants rated the degree of ambivalence and certainty they felt regarding their attitudes toward each word. We found that these 2 judgments corresponded to distinct brain regions' activity during the process of evaluation. Ambivalence corresponded to activation in anterior cingulate cortex, dorsomedial prefrontal cortex, and posterior cingulate cortex. Certainty, however, corresponded to activation in unique areas of the precuneus/posterior cingulate cortex. These results support a model treating ambivalence and certainty as distinct, though related, attitude strength variables, and we discuss implications for both attitudes and neuroscience research.
A key function of the medial temporal lobe (MTL) is to generate predictions based on prior experience (Bar, 2009). We propose that these MTL-generated predictions guide learning, such that predictions from memory influence memory itself. Considering this proposal within a context-based theory of learning and memory leads to the unique hypothesis that the act of predicting an event from the current context can enhance later memory for that event, even if the event does not actually occur. We tested this hypothesis using a novel paradigm in which the contexts of some stimuli were repeated during an incidental learning task, without the stimuli themselves being repeated. Results from 4 experiments show clear behavioral evidence in support of this hypothesis: Participants were more likely to remember once-presented items if the temporal contexts of those items were later repeated. However, this effect only occurred in learning environments where predictions could be helpful.
Evaluations of videotaped criminal confessions can be influenced by the camera perspective taken during recording. Interrogations and confessions recorded with the camera directing observers' visual attention onto the suspect lead to biased judgments of the suspect. Although a camera perspective that directs visual attention onto the suspect and interrogator equally appears to promote unbiased judgments, investigations to date have relied on videotapes that depict only Caucasian suspects and interrogators. We examined the possibility that even equal-focus videotapes may become problematic when the suspect is a minority (e.g., Chinese American or African American) and the interrogator is Caucasian. That is, to the extent that Caucasian observers are inclined to direct more of their attention onto minorities, an effect documented previously, we expected biased judgments of the suspect to also occur in equal-focus videotapes. Three experiments provided evidence of this racial salience bias. Implications are discussed, including a practical way of avoiding the bias.
The decision to approach or avoid an unfamiliar person is based in part on one's evaluation of facial expressions. Individuals with Williams syndrome (WS) are characterized in part by an excessive desire to approach people, yet they display deficits in identifying facial emotional expressions. Likert-scale ratings are generally used to examine approachability judgments in WS, but these measures capture only an individual's final approach/avoid decision. The present study expands on previous research by using mouse-tracking methodology to visually display the nature of approachability decisions via the motor movement of a computer mouse. We recorded mouse movement trajectories while participants chose to approach or avoid computer-generated faces that varied in trustworthiness. We recruited 30 individuals with WS and 30 chronological age-matched controls (mean age = 20 years). Each participant performed 80 trials (20 trials each of four face types: mildly and extremely trustworthy; mildly and extremely untrustworthy). We found that individuals with WS were significantly more likely than controls to choose to approach untrustworthy faces. In addition, WS participants considered approaching untrustworthy faces significantly more than controls, as evidenced by their larger maximum deviation, before eventually choosing to avoid the face. Both the WS and control participants were able to discriminate between mild and extreme degrees of trustworthiness and were more likely to make correct approachability decisions as they grew older. These findings increase our understanding of the cognitive processing that underlies approachability decisions in individuals with WS.
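The "maximum deviation" measure mentioned above is, in standard mouse-tracking analyses, the largest perpendicular distance of the recorded cursor path from the straight line connecting its start and end points. A minimal sketch of that computation (function and variable names are illustrative, not taken from the study's analysis code):

```python
import math

def max_deviation(path):
    """Return the maximum perpendicular distance of `path`
    (a list of (x, y) points) from the straight line connecting
    the first and last points."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    if length == 0:
        return 0.0
    # Perpendicular distance of point (x, y) from the line through
    # (x0, y0) and (x1, y1): |dy*(x - x0) - dx*(y - y0)| / length
    return max(abs(dy * (x - x0) - dx * (y - y0)) / length
               for x, y in path)

# A perfectly straight trajectory has zero deviation; a detour toward
# the opposite response option yields a larger value.
straight = [(0, 0), (1, 1), (2, 2)]
curved = [(0, 0), (2, 0), (2, 2)]
print(max_deviation(straight))  # 0.0
print(max_deviation(curved))    # ~1.414
```

On this reading, a larger maximum deviation indicates that the cursor was drawn further toward the unchosen response before the final decision, which is why it serves as an index of how strongly the alternative was considered.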