Recent attempts to improve the quality of psychological research focus on the good practices required for statistical significance testing. The scrutiny of theoretical reasoning, though superordinate, is largely neglected, as exemplified here by a common misunderstanding of mediation analysis. Although a test of a mediation model X ➔ Z ➔ Y is conditional on the premise that the model applies (alternative mediators Z′, Z″, Z‴, etc. remain untested, and other causal models could underlie the correlations between X, Y, and Z), researchers infer from a single significant mediation test that they have identified the true mediator. A literature search of all mediation analyses published in 2015 on ScienceDirect shows that the vast majority of studies consider neither alternative causal models nor alternative mediator candidates. Ignoring that mediation analysis is conditional on the truth of the focal mediation model, they claim to have demonstrated that Z mediates the influence of X on Y. Recommendations are provided for how to overcome this unsatisfying state of affairs.
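The conditional nature of such a test can be illustrated with a minimal sketch of the standard indirect-effect estimate (the product of the X ➔ Z and Z ➔ Y regression paths). All variable names and coefficients below are illustrative assumptions, not taken from any study discussed here; the point is that the same significant product a·b would arise under other causal structures as well.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulate data consistent with the focal model X -> Z -> Y
# (coefficients 0.5, 0.4, 0.1 are arbitrary illustrative values)
X = rng.normal(size=n)
Z = 0.5 * X + rng.normal(size=n)           # candidate mediator
Y = 0.4 * Z + 0.1 * X + rng.normal(size=n)

def ols(y, *preds):
    """Return OLS coefficients (intercept first)."""
    A = np.column_stack([np.ones(len(y)), *preds])
    return np.linalg.lstsq(A, y, rcond=None)[0]

a = ols(Z, X)[1]        # path X -> Z
b = ols(Y, X, Z)[2]     # path Z -> Y, controlling for X
indirect = a * b        # estimated indirect (mediated) effect

print(f"indirect effect a*b = {indirect:.2f}")
```

A nonzero a·b here is equally consistent with, say, Z being a common cause of X and Y, and it says nothing about untested mediators Z′, Z″, Z‴ — which is precisely the inferential gap the abstract describes.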
Previous research on advice taking has explained the failure to exploit collective wisdom in terms of the egocentric underweighting of advice provided by independent others. The present research is concerned with an opposite and more radical source of irrational advice taking, namely, the failure to critically assess the validity of advice due to metacognitive myopia. Participants could use the advice of one or two experts when estimating health risks. They read sketches of the study samples that experts had drawn to estimate conditional probabilities (e.g., of HIV given drug addiction). Whether samples were valid or seriously biased, subsequent judgments were strongly affected by any advice (Experiment 1). Uncritical reliance on any advice persisted when participants were sensitized to the contrast of valid and invalid advice in a repeated measures design (Experiment 2), when participants themselves believed advice not to be valid (Experiment 3), and even after full debriefing about invalid advice (Experiment 4). Lay advice exerted an influence similar to that of expert advice (Experiment 5). Although these provocative results are independent of numeracy and consensus (Experiment 6), they highlight the impact of metacognitive myopia as an impediment to social rationality.
Going beyond the origins of cognitive biases, which have been the focus of continued research, the notion of metacognitive myopia refers to the failure to monitor, control, and correct for biased inferences at the metacognitive level. Judgments often follow the given information uncritically, even when it is easy to find out, or explicitly explained, that information samples are misleading or invalid. The present research is concerned with metacognitive myopia in judgments of change. Participants had to decide whether pairs of binomial samples were drawn from populations with decreasing, equal, or increasing proportions p of a critical feature. Judgments of changes in p were strongly affected by changes in absolute sample size n, such that only increases (decreases) in p that came along with increasing (decreasing) n were readily detected. Across 4 experiments these anomalies persisted even though the distinction between p and n was strongly emphasized through outcome feedback and full debriefing (Experiments 1–4), simultaneous presentation (Experiments 2–4), and recoding of experienced samples into descriptive percentages (Experiments 3–4). In Experiment 4, a joint attempt was made by 10 scientists working in 7 different institutions to develop an effective debiasing training, suggesting how multi-lab collaboration might improve the quality of science at the early stage of operational research design. Despite significant improvements in change judgments, debiasing treatments did not eliminate the anomalies. Possible ways of dealing with the metacognitive deficit are discussed.
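The p-versus-n confound at issue can be sketched with a small simulation: a pair of binomial samples in which the population proportion p increases while the absolute sample size n decreases. The specific values of n and p below are illustrative assumptions, not stimuli from the reported experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

# Proportion p of the critical feature increases across the pair,
# while absolute sample size n decreases (illustrative numbers)
n1, p1 = 200, 0.30
n2, p2 = 50, 0.45

s1 = rng.binomial(1, p1, size=n1)  # first sample of binary outcomes
s2 = rng.binomial(1, p2, size=n2)  # second sample

# The normatively relevant quantities are the relative proportions ...
print(f"proportions: {s1.mean():.2f} -> {s2.mean():.2f}")

# ... but the absolute counts of the critical feature move in the
# opposite direction, the cue that change judgments tend to track
print(f"absolute counts: {s1.sum()} -> {s2.sum()}")
```

Here an increase in p is paired with a decrease in raw frequency, which is the kind of case in which, per the findings above, the change in p goes undetected.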