Prepublication peer review should be abolished. We consider the effects that such a change will have on the social structure of science, paying particular attention to the changed incentive structure and the likely effects on the behaviour of individual scientists. We evaluate these changes from the perspective of epistemic consequentialism. We find that where the effects of abolishing prepublication peer review can be evaluated with a reasonable level of confidence based on presently available evidence, they are either positive or neutral. We conclude that on present evidence abolishing peer review weakly dominates the status quo.
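To make the closing claim concrete: one option weakly dominates another when it is at least as good on every relevant dimension and strictly better on at least one. The sketch below shows the bare logic of such a comparison; the criteria and scores are invented for illustration and are not taken from the paper.

```python
# Hypothetical illustration of weak dominance: option A weakly dominates
# option B if A scores at least as well on every criterion and strictly
# better on at least one. All scores below are invented.

def weakly_dominates(a, b):
    """Return True if option `a` weakly dominates option `b`."""
    at_least_as_good = all(x >= y for x, y in zip(a, b))
    strictly_better = any(x > y for x, y in zip(a, b))
    return at_least_as_good and strictly_better

# Invented epistemic criteria: (error filtering, speed of dissemination,
# fairness). Higher is better.
no_review = (5, 9, 6)   # abolish prepublication peer review
status_quo = (5, 4, 6)  # keep journal peer review

print(weakly_dominates(no_review, status_quo))  # True
```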
Social scientists use many different methods, and there are often substantial disagreements about which method is appropriate for a given research question. In response to this uncertainty about the relative merits of different methods, W. E. B. Du Bois advocated for and applied "methodological triangulation": the use of multiple methods simultaneously, in the belief that, where one is uncertain about the reliability of any given method, an answer on which multiple methods converge is confirmed more strongly than it could have been by any single method. Against this, methodological purists believe that one should choose a single appropriate method and stick with it. Using tools from voting theory, we show Du Boisian methodological triangulation to be more likely to yield the correct answer than purism, assuming the scientist is subject to some degree of diffidence about the relative merits of the various methods. This holds even when in fact only one of the methods is appropriate for the given research question.
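The voting-theoretic idea can be illustrated with a minimal simulation. Everything here is assumed for illustration rather than drawn from the paper: three methods with invented reliabilities, and a diffident purist modelled as picking one method uniformly at random.

```python
# Minimal simulation of Du Boisian triangulation vs. methodological
# purism. Assumptions (mine, not the paper's): three methods with
# invented reliabilities; a diffident purist picks one method at random.
import random

RELIABILITIES = [0.8, 0.6, 0.55]  # invented P(method gives correct answer)
TRIALS = 100_000

def run_method(p):
    return random.random() < p  # True = method returns the correct answer

tri_correct = purist_correct = 0
for _ in range(TRIALS):
    votes = [run_method(p) for p in RELIABILITIES]
    tri_correct += sum(votes) > len(votes) / 2        # majority of methods
    purist_correct += run_method(random.choice(RELIABILITIES))

print(f"triangulation: {tri_correct / TRIALS:.3f}")  # ~0.72
print(f"purism:        {purist_correct / TRIALS:.3f}")  # ~0.65
```

With these invented numbers, majority voting over all three methods beats a random pick of one, even though the single best method (reliability 0.8) would beat triangulation if the scientist knew which method it was; this is what the diffidence assumption does.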
Recent philosophical work has praised the reward structure of science, while recent empirical work has shown that many scientific results may not be reproducible. I argue that the reward structure of science incentivizes scientists to focus on speed and impact at the expense of the reproducibility of their work, thus contributing to the so-called reproducibility crisis. I use a rational choice model to identify a set of sufficient conditions for this problem to arise, and I argue that these conditions plausibly apply to a wide range of research situations. Currently proposed solutions will not fully address this problem.
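A toy expected-credit calculation, with invented payoff numbers rather than the paper's actual model, illustrates how such incentives can favour speed over reproducibility.

```python
# Toy expected-credit comparison, in the spirit of (but not reproducing)
# the paper's rational choice model. A "rushed" strategy yields more
# papers per year at a lower probability that each result is
# reproducible; if credit tracks publications and failed replications
# carry little penalty, rushing maximizes expected credit. All numbers
# below are invented.

def expected_credit(papers_per_year, p_reproducible,
                    credit_per_paper=1.0, replication_penalty=0.1):
    """Expected annual credit under an invented payoff scheme."""
    expected_penalty = (1 - p_reproducible) * replication_penalty
    return papers_per_year * (credit_per_paper - expected_penalty)

print(expected_credit(papers_per_year=5, p_reproducible=0.5))  # rushed: 4.75
print(expected_credit(papers_per_year=2, p_reproducible=0.9))  # careful: 1.98
```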
The communist norm requires that scientists widely share the results of their work. Where did this norm come from, and how does it persist? Michael Strevens provides a partial answer to these questions by showing that scientists should be willing to sign a social contract that mandates sharing. However, he also argues that it is not in an individual credit-maximizing scientist's interest to follow this norm. I argue, against Strevens, that individual scientists can rationally conform to the communist norm even in the absence of a social contract or other means of social enforcement, proving results to this effect in a game-theoretic model. This shows that the incentives provided to scientists through the priority rule are sufficient to explain both the origins and the persistence of the communist norm, adding to previous results emphasizing the benefits of the incentive structure created by the priority rule.
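The flavour of the result can be conveyed by a back-of-the-envelope comparison under the priority rule. The parameters below (number of intermediate results, per-period scooping risk) are invented for illustration and do not reproduce the paper's game-theoretic model.

```python
# Hedged sketch of why immediate sharing can maximize expected credit
# under the priority rule. Invented setup: a project yields STAGES
# intermediate results; each period an unpublished result is scooped by
# a rival with probability SCOOP, forfeiting its credit.

STAGES = 4    # invented number of intermediate results
SCOOP = 0.2   # invented per-period risk of being scooped
CREDIT = 1.0  # credit per intermediate result

# Sharing: each result is published as soon as it is obtained, so it is
# never exposed to scooping and its credit is secure.
share = STAGES * CREDIT

# Hoarding: result i (obtained in period i) sits unpublished for
# STAGES - i further periods before the whole project is released, and
# must survive scooping in each of those periods.
hoard = sum(CREDIT * (1 - SCOOP) ** (STAGES - i) for i in range(1, STAGES + 1))

print(f"share: {share:.2f}, hoard: {hoard:.2f}")  # share: 4.00, hoard: 2.95
```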
Peer review is often taken to be the main form of quality control on academic research. Usually journals carry this out. However, parts of maths and physics appear to have a parallel, crowd-sourced model of peer review, where papers are posted on the arXiv to be publicly discussed. In this paper we argue that crowd-sourced peer review is likely to do better than journal-solicited peer review at sorting papers by quality. Our argument rests on two key claims. First, crowd-sourced peer review will lead on average to more reviewers per paper than journal-solicited peer review. Second, due to the wisdom of the crowds, more reviewers will tend to make better judgements than fewer. We make the second claim precise by looking at the Condorcet jury theorem as well as two related jury theorems developed specifically to apply to peer review.

1 Introduction
2 Assumptions of Peer Review
3 Crowd-Sourcing More Reviewers
4 The Basic Condorcet Jury Theorem
5 A Jury Theorem for Reviewer Scores
6 A Jury Theorem for Reviewer Reasons
7 Replies to Potential Objections
7.1 Manipulation of reviewer scores?
7.2 Greater average competence in journal-solicited peer review?
7.3 Failures of independence in crowd-sourced peer review?
8 Conclusion
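The basic Condorcet jury theorem behind the second claim is easy to compute with: if each of n independent reviewers judges correctly with probability p > 1/2, the probability that a majority judges correctly increases with n and tends to 1. The values of n and p below are illustrative only.

```python
# Exact probability that a strict majority of n independent reviewers is
# correct, each with competence p, per the basic Condorcet jury theorem.
from math import comb

def majority_correct(n, p):
    """P(strict majority of n independent voters is correct), n odd."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 11, 51):
    print(n, round(majority_correct(n, 0.6), 3))
# The printed probabilities climb toward 1 as n grows
# (0.6, 0.648, 0.753, ...), which is the wisdom-of-crowds effect
# the crowd-sourcing argument relies on.
```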