The crisis of confidence in psychology has prompted vigorous and persistent debate in the scientific community concerning the veracity of the findings of psychological experiments. This discussion has led to changes in psychology's approach to research, and several new initiatives have been developed, many with the aim of improving the reliability of published findings. One key advance is the marked increase in the number of replication studies conducted. We argue that while it is important to conduct replications as part of regular research protocol, it is neither efficient nor useful to replicate results at random. We recommend adopting a methodical approach to the selection of replication targets in order to maximize the impact of the outcomes of those replications and minimize the waste of scarce resources. In the current study, we demonstrate how a Bayesian re-analysis of existing research findings, followed by a simple qualitative assessment process, can drive the selection of the best candidate article for replication.
The crisis of confidence has undermined the trust that researchers place in the findings of their peers. To increase trust in research, initiatives such as preregistration have been proposed, which aim to prevent various questionable research practices. As it stands, however, no empirical evidence exists that preregistration increases perceptions of trust. The picture may be further complicated by a researcher's familiarity with the author of a study, regardless of the preregistration status of the research. This registered report presents an empirical assessment of the extent to which preregistration increases the trust of 209 active academics in reported outcomes, and of how familiarity with another researcher influences that trust. Contrary to our expectations, we report ambiguous Bayes factors and conclude that we do not have strong evidence toward answering our research questions. Our findings are presented along with evidence that our manipulations were ineffective for many participants, leading to the exclusion of 68% of complete datasets and, as a consequence, an underpowered design. We discuss other limitations and confounds which may explain why the findings of the study deviate from a previously conducted pilot study, and reflect on the benefits of using the registered report submission format in light of our results. The OSF page for this registered report and its pilot can be found here: http://dx.doi.org/10.17605/OSF.IO/B3K75 .
Amid the replication crisis in psychology, a “tone debate” has developed. It concerns the question of how to conduct scientific debate effectively and ethically: how should scientists give critique without unnecessarily damaging relations? The increasing use of Facebook and Twitter by researchers has made this issue especially pressing, as these social technologies have greatly expanded the possibilities for conversation between academics while offering little formal control over the debate. In this article, we show that psychologists have tried to solve this issue with various codes of conduct, with appeals to virtues such as humility, and with practices of self-transformation. We also show that the polemical style of debate, popular in many scientific communities, is itself being questioned by psychologists. Following Shapin and Schaffer’s analysis of the ethics of Robert Boyle’s experimental philosophy in the 17th century, we trace the connections between knowledge, social order, and subjectivity as they are debated and revised by present-day psychologists.
The crisis of confidence in the social sciences has many corollaries which impact our research practices. One of these is a push towards maximal and mechanical objectivity in quantitative research. This stance is reinforced by major journals and academic institutions that subtly yet certainly link objectivity with integrity and rigor. The converse implication of this may be an association between subjectivity and low quality. Subjectivity is one of qualitative methodology’s best assets, however. In qualitative methodology, that subjectivity is often given voice through reflexivity. It is used to better understand our own role within the research process, and is a means through which the researcher may monitor how they influence their research. Given that the actions of researchers have led to the poor reproducibility characterising the crisis of confidence, it is worthwhile to consider whether reflexivity can help improve the validity of research findings in quantitative psychology. In this report, we describe a mixed-methods approach: data from a series of interviews help us elucidate the link between reflexive practice and quality of research, through the eyes of practicing academics. Through our exploration of the position of the researcher in their research, we shed light on how the reflections of the researcher can impact the quality of their research findings, in the context of the current crisis of confidence. The validity of these findings is tempered, however, by limitations of the sample, and we advise caution on the part of our audience in reading our conclusions.
The crisis of confidence has played a primary role in undermining the trust researchers place in the findings of their peers, and our beliefs about the credibility of research results. Thus, the importance of increasing trust in credible reported research is paramount. Initiatives such as preregistration aim to establish a more trustworthy scientific literature by helping to prevent various questionable research practices. As it stands, however, no empirical evidence exists demonstrating that preregistration does increase trust. Indeed, the objective merits of preregistration are greatly diminished if a researcher's subjective assessment of its value does not align with them. Additionally, the picture may be complicated by a researcher's familiarity with the author of a study, regardless of the preregistration status of the research. The following proposal describes how we aim to test the extent to which preregistration increases the trust of participants in reported outcomes. We also aim to assess how familiarity with another researcher might influence trust. We expect that preregistration increases researchers' trust in findings relative to no preregistration, and that registered reporting increases trust more than preregistration alone. We also expect that familiarity enhances trust judgments to some extent; however, we do not have specific expectations regarding the nature of this effect, and we therefore include familiarity as an exploratory effect in our analyses. The OSF page for this registered report proposal and its pilot can be found here: http://dx.doi.org/10.17605/OSF.IO/B3K75