Researchers face many, often seemingly arbitrary, choices in formulating hypotheses, designing protocols, collecting data, analyzing data, and reporting results. Opportunistic use of “researcher degrees of freedom” aimed at obtaining statistical significance increases the likelihood of obtaining and publishing false-positive results and overestimated effect sizes. Preregistration is a mechanism for reducing such degrees of freedom by specifying designs and analysis plans before observing the research outcomes. The effectiveness of preregistration may depend, in part, on whether the process facilitates sufficiently specific articulation of such plans. In this preregistered study, we compared two formats of preregistration available on the OSF: Standard Pre-Data Collection Registration and Prereg Challenge Registration (now called “OSF Preregistration,” http://osf.io/prereg/). The Prereg Challenge format was a “structured” workflow with detailed instructions and an independent review to confirm completeness; the “Standard” format was “unstructured,” with minimal direct guidance, to give researchers flexibility in what to prespecify. Results of comparing random samples of 53 preregistrations from each format indicate that the “structured” format restricted the opportunistic use of researcher degrees of freedom better (Cliff’s Delta = 0.49) than the “unstructured” format, but neither eliminated all researcher degrees of freedom. We also observed very low concordance among coders about the number of hypotheses (14%), indicating that hypotheses are often not clearly stated. We conclude that effective preregistration is challenging, and registration formats that provide effective guidance may improve the quality of research.
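For readers unfamiliar with the effect size reported above: Cliff’s Delta compares every observation in one group against every observation in the other, counting how often the first exceeds the second versus the reverse. The short Python sketch below illustrates the calculation; the scores and the function name are hypothetical illustrations, not data or code from the study.

```python
import numpy as np

def cliffs_delta(x, y):
    """Cliff's Delta: proportion of (x, y) pairs with x > y minus the
    proportion with x < y; ranges from -1 to 1, with 0 meaning no difference."""
    x = np.asarray(x)
    y = np.asarray(y)
    greater = np.sum(x[:, None] > y[None, :])
    less = np.sum(x[:, None] < y[None, :])
    return (greater - less) / (len(x) * len(y))

# Hypothetical "restriction of researcher degrees of freedom" scores
structured = [7, 8, 6, 9, 7, 8]
unstructured = [5, 6, 4, 7, 5, 6]
print(cliffs_delta(structured, unstructured))  # positive values favor the structured format
```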
Modern theories of moral judgment predict that both conscious reasoning and unconscious emotional influences affect the way people decide about right and wrong. In a series of experiments, we tested the effect of subliminal and conscious priming of disgust facial expressions on moral dilemmas. “Trolley-car”-type scenarios were used, with subjects rating how acceptable they found the utilitarian course of action to be. On average, subliminal priming of disgust facial expressions resulted in higher rates of utilitarian judgments than did neutral facial expressions. Further, in a replication, we found that individual change in moral acceptability ratings due to disgust priming was modulated by individual sensitivity to disgust, revealing a bi-directional function. Our second replication extended this result to show that the function held for both subliminally and consciously presented stimuli. Combining data across these experiments, we show a reliable bi-directional function, with presentation of disgust expression primes to individuals with higher disgust sensitivity resulting in more utilitarian judgments (i.e., number-based) and presentations to individuals with lower sensitivity resulting in more deontological judgments (i.e., rules-based). Our results may reconcile previous conflicting reports of disgust modulation of moral judgment by modeling how individual sensitivity to disgust determines the direction and degree of this effect.
In this preregistered study, we investigated whether the statistical power of a study is higher when researchers are asked to perform a formal power analysis before collecting data. We compared the sample size descriptions from two sources: (i) a sample of preregistrations created according to the guidelines for the Center for Open Science Preregistration Challenge (PCRs), together with a sample of institutional review board (IRB) proposals from the Tilburg School of Social and Behavioral Sciences, both of which include a recommendation to conduct a formal power analysis, and (ii) a sample of preregistrations created according to the guidelines for Open Science Framework Standard Pre-Data Collection Registrations (SPRs), which give no guidance on sample size planning. We found that the PCRs and IRB proposals (72%) more often included sample size decisions based on power analyses than the SPRs (45%). However, this did not result in larger planned sample sizes: the determined sample size of the PCRs and IRB proposals (Md = 90.50) was not higher than that of the SPRs (Md = 126.00; W = 3389.5, p = 0.936). Typically, power analyses in the registrations were conducted with G*Power, assuming a medium effect size, α = .05, and a power of .80. Only 20% of the power analyses contained enough information to fully reproduce the results, and only 62% of these power analyses pertained to the main hypothesis test in the preregistration. Therefore, we see ample room for improvement in the quality of the registrations and we offer several recommendations to do so.
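As a point of reference for the “typical” power analysis described above (medium effect size, α = .05, power = .80), the same calculation can be reproduced in a few lines. The sketch below is illustrative only: it uses Python’s statsmodels rather than G*Power, and it assumes a two-sided, two-group independent-samples t test, which is an assumption rather than a detail reported for every registration.

```python
from statsmodels.stats.power import TTestIndPower

# Required sample size per group for an independent-samples t test,
# assuming a medium effect (Cohen's d = 0.5), alpha = .05, and power = .80.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative='two-sided')
print(round(n_per_group))  # ~64 per group, i.e., ~128 participants in total
```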
This study assesses the extent of selective hypothesis reporting in psychological research by comparing the hypotheses found in a set of 459 preregistrations to the hypotheses found in the corresponding papers. We found that more than half of the preregistered studies we assessed contain omitted hypotheses (N = 224; 52.2%) or added hypotheses (N = 227; 56.8%), and about one-fifth of studies contain changed hypotheses (N = 82; 19%). We found only a small number of studies with demoted hypotheses (N = 2; 1%) and no studies with promoted hypotheses. In all, 59% of studies include at least one hypothesis in one or more of these categories, indicating a substantial bias in presenting and selecting hypotheses by researchers and/or reviewers/editors. Contrary to our expectations, we found that added hypotheses and changed hypotheses were not more likely to be statistically significant than non-selectively reported hypotheses. For the other types of selective hypothesis reporting, no sufficiently powered test of the relationship with statistical significance could be carried out. Finally, we found that replication studies were less likely to include selectively reported hypotheses than original studies. Thus, selective hypothesis reporting is problematically common in psychological research and may be partly explained by authors not formulating hypotheses in preregistrations and papers specifically and clearly enough. We urge researchers, reviewers, and editors to ensure that hypotheses outlined in preregistrations are clearly formulated and accurately presented in the corresponding papers.
Moral judgments are not just the product of conscious reasoning, but also involve the integration of social and emotional information. Irrelevant disgust stimuli modulate moral judgments, with individual sensitivity determining the direction and size of effects across both hypothetical and incentive-compatible experimental designs. We investigated the neural circuitry underlying this modulation using fMRI in 19 individuals performing a moral judgment task with subliminal priming of disgust facial expressions. Our results indicate that individual changes in moral acceptability due to priming covaried with individual differences in activation within the dorsomedial prefrontal cortex (dmPFC). Further, whole-brain analyses identified changes in functional connectivity between the dmPFC and the temporal-parietal junction (TPJ). High-sensitivity individuals showed enhanced functional connectivity between the TPJ and dmPFC, corresponding with deactivation in the dmPFC and with rating the moral dilemmas as more acceptable. Low-sensitivity individuals showed the opposite pattern of results. Post hoc, these findings were replicated in the dorsal anterior cingulate cortex (daMCC), an adjacent region implicated in converting between objective and subjective valuation. This suggests a specific computational mechanism: disgust stimuli modulate moral judgments by altering the integration of social information to determine the subjective valuation of the considered moral actions.