Registered Reports are a form of empirical journal article in which study proposals are peer reviewed and pre-accepted before research is undertaken. By deciding which articles are published based on the question, theory, and proposed methods, Registered Reports offer a powerful remedy for a range of reporting and publication biases. Here we reflect on the history, progress and future prospects of the Registered Reports initiative, and also offer practical guidance for authors, reviewers, and editors encountering the format for the first time. While the key ingredients of pre-study review and results-blind acceptance are far from novel – and are already adopted independently in a variety of contexts – Registered Reports are the first mechanism to combine them into a mainstream policy that has won appeal with multiple stakeholders in the research process. We review early evidence that Registered Reports are working as intended, while at the same time acknowledging that they are not a universal solution for irreproducibility. We also consider how the policies and practices surrounding Registered Reports are changing, or must change in future, to address limitations and adapt to new challenges. In spite of these caveats, we conclude that Registered Reports are promoting reproducibility, transparency and self-correction across a wide range of disciplines, and may help reshape how society evaluates research and researchers.
Recently, there has been a growing emphasis on embedding open and reproducible approaches into research. One essential step in accomplishing this larger goal is to embed such practices into undergraduate and postgraduate research training. However, this often requires substantial time and resources to implement. Also, while many pedagogical resources are regularly developed for this purpose, they are not often openly and actively shared with the wider community. The creation and public sharing of open educational resources is useful for educators who wish to embed open scholarship and reproducibility into their teaching and learning. In this article, we describe and openly share a bank of teaching resources and lesson plans on the broad topics of open scholarship, open science, replication, and reproducibility that can be integrated into taught courses, to support educators and instructors. These resources were created as part of the Society for the Improvement of Psychological Science (SIPS) hackathon at the 2021 Annual Conference, and we detail this collaborative process in the article. By sharing these open pedagogical resources, we aim to reduce the labour required to develop and implement open scholarship content to further the open scholarship and open educational materials movement.
In this Registered Report, we assessed the utility of the affective priming paradigm (APP) as an indirect measure of food attitudes and related choice behaviour in two separate cohorts. Participants undertook a speeded evaluative categorization task in which target words were preceded by food primes that differed in terms of affective congruence with the target, explicit liking (most liked or least liked), and healthiness (healthy or unhealthy). Non-food priming effects were tested as a manipulation check, and the relationship between food priming effects and impulsive choice behaviour was also investigated using a binary food choice task. As predicted, priming effects were observed for both healthy and unhealthy foods, but there was no difference in the magnitude of these effects. This may suggest that the paradigm is most sensitive to affective, but not cognitive (i.e., healthiness), components of attitudes, but alternative theoretical explanations and implications of this finding are discussed. Food and non-food priming effects were observed in both reaction time (RT) and error rate (ER) data, but contrary to expectations, we found no association between food RT priming effects and choice behaviour. All findings from confirmatory analyses regarding RT and ER priming effects, and the absence of the expected correlations between priming effects and impulsive food choices, were successfully replicated in the online cohort of participants. Overall, this study confirms the robustness of the APP as an indirect measure of food liking and raises questions about its applied value for research on eating behaviour.
Journals exert considerable control over letters, commentaries and online comments that criticize prior research (post-publication critique). We assessed policies (Study One) and practice (Study Two) related to post-publication critique at 15 top-ranked journals in each of 22 scientific disciplines (N = 330 journals). Two hundred and seven (63%) journals accepted post-publication critique and often imposed limits on length (median 1000, interquartile range (IQR) 500–1200 words) and time-to-submit (median 12, IQR 4–26 weeks). The most restrictive limits were 175 words and two weeks; some policies imposed no limits. Of 2066 randomly sampled research articles published in 2018 by journals accepting post-publication critique, 39 (1.9%, 95% confidence interval [1.4, 2.6]) were linked to at least one post-publication critique (there were 58 post-publication critiques in total). Of the 58 post-publication critiques, 44 received an author reply, of which 41 asserted that the original conclusions were unchanged. Clinical Medicine had the most active culture of post-publication critique: all journals accepted post-publication critique and published the most post-publication critique overall, but also imposed the strictest limits on length (median 400, IQR 400–550 words) and time-to-submit (median 4, IQR 4–6 weeks). Our findings suggest that top-ranked academic journals often pose serious barriers to the cultivation, documentation and dissemination of post-publication critique.
Participant crowdsourcing platforms (e.g., MTurk, Prolific) offer numerous advantages to addiction science, permitting access to hard-to-reach populations and enhancing the feasibility of complex experimental, longitudinal, and intervention studies. Yet these advantages are met with equal concerns about participant non-naivety, motivation, and careless responding, which, if not considered, can greatly compromise data quality. In this article, we discuss an alternative crowdsourcing avenue that overcomes these issues whilst presenting its own unique advantages: crowdsourcing researchers through big team science. First, we review several contemporary efforts within psychology (e.g., ManyLabs, Psychological Science Accelerator) and the benefits these would yield if they were more widely implemented in addiction science. We then outline our own consortium-based approach to empirical dissertations: a grassroots initiative that trains students in reproducible big team addiction science. In doing so, we discuss potential challenges and their remedies, as well as providing resources to help addiction researchers develop these initiatives. Through researcher crowdsourcing, together we can answer fundamental scientific questions about substance use and addiction, build a literature that is representative of a diverse population of researchers and participants, and ultimately achieve our goal of promoting better global health. Public Health Significance: This special issue on "crowdsourcing methods in addiction science" focuses on best practices and emerging research that uses online participant recruitment platforms. An alternative method is that of crowdsourcing researchers through big team science.
In this article, we: (a) review contemporary researcher crowdsourcing efforts and the benefits these would bring to addiction science; (b) outline our approach to training students in reproducible big team addiction science; and (c) evaluate challenges and their remedies, as well as providing resources, to help others develop these initiatives.
Inhibitory control training effects on behaviour (e.g. ‘healthier’ food choices) can be driven by changes in affective evaluations of trained stimuli, and theoretical models indicate that changes in action tendencies may be a complementary mechanism. In this preregistered study, we investigated the effects of food-specific go/no-go training on action tendencies, liking and impulsive choices in healthy participants. In the training task, energy-dense foods were assigned to one of three conditions: 100% inhibition (no-go), 0% inhibition (go) or 50% inhibition (control). Automatic action tendencies and liking were measured pre- and post-training for each condition. We found that training did not lead to changes in approach bias towards trained foods (go and no-go relative to control), but we warrant caution in interpreting this finding as there are important limitations to consider for the employed approach–avoidance task. There was only anecdotal evidence for an effect on food liking, but there was evidence for contingency learning during training, and participants were on average less likely to choose a no-go food compared to a control food after training. We discuss these findings from both a methodological and theoretical standpoint and propose that the mechanisms of action behind training effects be investigated further.