Survey of 1,576 researchers: Is there a reproducibility crisis? 52% yes, a significant crisis; 38% yes, a slight crisis; 3% no, there is no crisis; 7% don't know.

More than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments. Those are some of the telling figures that emerged from Nature's survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research. The data reveal sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant 'crisis' of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature. Data on how much of the scientific literature is reproducible are rare and generally bleak. The best-known analyses, from psychology (1) and cancer biology (2), found rates of around 40% and 10%, respectively. Our survey respondents were more optimistic: 73% said that they think that at least half of the papers in their field can be trusted, with physicists and chemists generally showing the most confidence. The results capture a confusing snapshot of attitudes around these issues, says Arturo Casadevall, a microbiologist at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. "At the current time there is no consensus on what reproducibility is or should be." But just recognizing that is a step forward, he says. "The next step may be identifying what is the problem and to get a consensus."
Black Americans are systematically undertreated for pain relative to white Americans. We examine whether this racial bias is related to false beliefs about biological differences between blacks and whites (e.g., "black people's skin is thicker than white people's skin"). Study 1 documented these beliefs among white laypersons and revealed that participants who more strongly endorsed false beliefs about biological differences reported lower pain ratings for a black (vs. white) target. Study 2 extended these findings to the medical context and found that half of a sample of white medical students and residents endorsed these beliefs. Moreover, participants who endorsed these beliefs rated the black (vs. white) patient's pain as lower and made less accurate treatment recommendations. Participants who did not endorse these beliefs rated the black (vs. white) patient's pain as higher, but showed no bias in treatment recommendations. These findings suggest that individuals with at least some medical training hold and may use false beliefs about biological differences between blacks and whites to inform medical judgments, which may contribute to racial disparities in pain assessment and treatment.

Keywords: racial bias | pain perception | health care disparities | pain treatment

A young man goes to the doctor complaining of severe pain in his back. He expects and trusts that a medical expert, his physician, will assess his pain and prescribe the appropriate treatment to reduce his suffering. After all, a primary goal of health care is to reduce pain and suffering. Whether he receives the standard of care that he expects, however, is likely contingent on his race/ethnicity. Prior research suggests that if he is black, then his pain will likely be underestimated and undertreated compared with if he is white (1-10). The present work investigates one potential factor associated with this racial bias.
Specifically, in the present research, we provide evidence that white laypeople and medical students and residents believe that the black body is biologically different from, and in many cases stronger than, the white body. Moreover, we provide evidence that these beliefs are associated with racial bias in perceptions of others' pain, which in turn predicts accuracy in pain treatment recommendations. The current work, then, addresses an important social factor that may contribute to racial bias in health and health care.

Extant research has shown that, relative to white patients, black patients are less likely to be given pain medications and, if given pain medications, they receive lower quantities (1-10). For example, in a retrospective study, Todd et al. (10) found that black patients were significantly less likely than white patients to receive analgesics for extremity fractures in the emergency room (57% vs. 74%), despite having similar self-reports of pain. This disparity in pain treatment is true even among young children. For instance, a study of nearly one million children diagnosed with appendicitis revealed that, relative to white pa...
We conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples and settings. Each protocol was administered to approximately half of 125 samples that comprised 15,305 participants from 36 countries and territories. Using the conventional criterion of statistical significance (p < .05), we found that 15 (54%) of the replications provided evidence of a statistically significant effect in the same direction as the original finding. With a strict significance criterion (p < .0001), 14 (50%) of the replications still provided such evidence, a reflection of the extremely high-powered design. Seven (25%) of the replications yielded effect sizes larger than the original ones, and 21 (75%) yielded effect sizes smaller than the original ones. The median comparable Cohen’s ds were 0.60 for the original findings and 0.15 for the replications. The effect sizes were small (< 0.20) in 16 of the replications (57%), and 9 effects (32%) were in the direction opposite the direction of the original effect. Across settings, the Q statistic indicated significant heterogeneity in 11 (39%) of the replication effects, and most of those were among the findings with the largest overall effect sizes; only 1 effect that was near zero in the aggregate showed significant heterogeneity according to this measure. Only 1 effect had a tau value greater than .20, an indication of moderate heterogeneity. Eight others had tau values near or slightly above .10, an indication of slight heterogeneity. Moderation tests indicated that very little heterogeneity was attributable to the order in which the tasks were performed or whether the tasks were administered in lab versus online.
Exploratory comparisons revealed little heterogeneity between Western, educated, industrialized, rich, and democratic (WEIRD) cultures and less WEIRD cultures (i.e., cultures with relatively high and low WEIRDness scores, respectively). Cumulatively, variability in the observed effect sizes was attributable more to the effect being studied than to the sample or setting in which it was studied.
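The Q statistic and tau reported above are standard random-effects heterogeneity measures. As a rough sketch of how they are computed (not the authors' own analysis code), the DerSimonian-Laird method-of-moments estimator is shown below; the five site-level effect sizes and their sampling variances are made up for illustration:

```python
import math

def dl_heterogeneity(effects, variances):
    """DerSimonian-Laird heterogeneity estimates for a set of study effects.

    Returns (Q, tau): Cochran's Q statistic and the between-study
    standard deviation tau, the two measures reported per effect above.
    """
    w = [1.0 / v for v in variances]                         # inverse-variance weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw   # pooled fixed-effect estimate
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                            # method-of-moments tau^2
    return q, math.sqrt(tau2)

# Hypothetical example: five site-level Cohen's d estimates, each with
# sampling variance 0.01 (standard error 0.1).
Q, tau = dl_heterogeneity([0.10, 0.15, 0.20, 0.60, 0.05], [0.01] * 5)
```

With these invented inputs, Q exceeds its degrees of freedom (k - 1 = 4) and tau lands just under 0.20, which in the abstract's terms would count as significant, moderate heterogeneity, driven here by the one outlying site.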
Implicit preferences are malleable, but does that change last? We tested nine interventions (eight real and one sham) to reduce implicit racial preferences over time. In two studies with a total of 6,321 participants, all nine interventions immediately reduced implicit preferences. However, none were effective after a delay of several hours to several days. We also found that these interventions did not change explicit racial preferences and were not reliably moderated by motivations to respond without prejudice. Short-term malleability in implicit preferences does not necessarily lead to long-term change, raising new questions about the flexibility and stability of implicit preferences.

Keywords: attitudes, racial prejudice, implicit social cognition, malleability, Implicit Association Test

Full citation: Lai, C. K., Skinner, A. L., Cooley, E., Murrar, S., Brauer, M., Devos, T., Calanchini, J., Xiao, Y. J., Pedram, C., Marshburn, C. K., Simon, S., Blanchar, J. C., Joy-Gaba, J. A., Conway, J., Redford, L., Klein, R. A., Roussos, G., Schellhaas, F. M. H., Burns, M., Hu, X., McLean, M. C., Axt, J. R., Asgari, S., Schmidt, K., Rubinstein, R., Marini, M., Rubichi, S., Shin, J. L., & Nosek, B. A. (2016). Reducing implicit racial preferences: II. Intervention effectiveness across time. Journal of Experimental Psychology: General, 145, 1001-1016.

Early theories of implicit social cognition suggested that implicit associations were largely stable. These claims were supported by evidence that changes in conscious belief did not lead to corresponding changes in implicit associations (e.g., Devine, 1989; Wilson, Lindsey, & Schooler, 2000).
The psychologist John Bargh referred to the stability of implicit cognitions as the "cognitive monster": "Once a stereotype is so entrenched that it becomes activated automatically, there is really little that can be done to control its influence" (Bargh, 1999, p. 378). This dominant view has changed over the past fifteen years to one of implicit malleability, with many studies finding that implicit associations are sensitive to lab-based interventions (for reviews, see Blair, 2002; Gawronski & Bodenhausen, 2006; Lai, Hoffman, & Nosek, 2013). These interventions vary greatly in approach. In one, for example, participants are exposed to images of people who defy stereotypes (e.g., admired Black people / hated White people; Joy-Gaba & Nosek, 2010). In another, participants are given goals to override implicit biases (e.g., Mendoza, Gollwitzer, & Amodio, 2010; Stewart & Payne, 2008).

In most of the research on implicit association change, the short-term malleability of associations is tested by administering an implicit measure immediately after the intervention. Studies examining long-term change in implicit associations are rare. In a meta-analysis on experiments to change implicit associations (Forscher, Lai, et al., 2016), only 22 (3.7%) of 585 studies ...
Using data from 217 research reports (N = 36,071, compared to 3,471 and 5,433 in previous meta-analyses), this meta-analysis investigated the conceptual and methodological conditions under which Implicit Association Tests (IATs) measuring attitudes, stereotypes, and identity correlate with criterion measures of intergroup behavior. We found significant implicit-criterion correlations (ICCs) and explicit-criterion correlations (ECCs), with unique contributions of implicit (β = .14) and explicit measures (β = .11) revealed by structural equation modeling. ICCs were found to be highly heterogeneous, making moderator analyses necessary. Basic study features and conceptual variables did not account for any heterogeneity: unlike explicit measures, implicit measures predicted for all target groups and types of behavior, and implicit, but not explicit, measures were equally associated with behaviors varying in controllability and conscious awareness. However, ICCs differed greatly by methodological features: studies with a declared focus on ICCs, standard IATs rather than variants, high-polarity attributes, behaviors measured in a relative (two categories present) rather than absolute manner (single category present), and high implicit-criterion correspondence (k = 13) produced a mean ICC of r = .37. Studies scoring low on these variables (k = 6) produced an ICC of r = .02. Examination of methodological properties, a novelty of this meta-analysis, revealed that most studies were vastly underpowered and that analytic strategies regularly ignored measurement error. Recommendations, along with online applications for calculating statistical power and internal consistency (http://www.benedekkurdi.com/#iat), are provided to improve future studies on the implicit-criterion relationship. Open materials are available at https://osf.io/47xw8/.
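The claim that most studies were vastly underpowered can be made concrete with a standard sample-size calculation for detecting a correlation via the Fisher z approximation. This is a sketch, not the meta-analysis's own tooling, and it treats the unique implicit contribution of β = .14 as an r-sized effect purely for illustration:

```python
from math import atanh, ceil
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate sample size to detect correlation r in a two-sided test,
    using the Fisher z approximation: n = ((z_a + z_b) / atanh(r))^2 + 3.
    """
    z = NormalDist().inv_cdf
    za = z(1 - alpha / 2)   # critical value for two-sided alpha
    zb = z(power)           # quantile corresponding to desired power
    return ceil(((za + zb) / atanh(r)) ** 2 + 3)

# Illustration: detecting an effect of the size reported above (≈ .14)
# with 80% power requires roughly 400 participants per study.
n_needed = n_for_correlation(0.14)
```

Since the typical primary study in this literature enrolls far fewer participants than this, the underpowering the authors describe follows directly.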
Using a novel technique known as network meta-analysis, we synthesized evidence from 492 studies (87,418 participants) to investigate the effectiveness of procedures in changing implicit measures, which we define as response biases on implicit tasks. We also evaluated these procedures' effects on explicit and behavioral measures. We found that implicit measures can be changed, but effects are often relatively weak (|ds| < .30). Most studies focused on producing short-term changes with brief, single-session manipulations. Procedures that associate sets of concepts, invoke goals or motivations, or tax mental resources changed implicit measures the most, whereas procedures that induced threat, affirmation, or specific moods/emotions changed implicit measures the least. Bias tests suggested that implicit effects could be inflated relative to their true population values. Procedures changed explicit measures less consistently and to a smaller degree than implicit measures and generally produced trivial changes in behavior. Finally, changes in implicit measures did not mediate changes in explicit measures or behavior. Our findings suggest that changes in implicit measures are possible, but those changes do not necessarily translate into changes in explicit measures or behavior.