2019
DOI: 10.1016/j.cell.2019.11.029
Universal Principled Review: A Community-Driven Method to Improve Peer Review

Abstract: Despite being a staple of our science, the process of pre-publication peer review has few agreed-upon standards defining its goals or ideal execution. As a community of reviewers and authors, we assembled an evaluation format and associated specific standards for the process as we think it should be practiced. We propose that we apply, debate, and ultimately extend these to improve the transparency of our criticism and the speed with which quality data and ideas become public.

Cited by 9 publications (6 citation statements); references 3 publications.
“…Enlarging the pool of reviewers to potentially an entire scientific community and accelerating the whole process requires a standard for peer reviews [32]: for example, some aspects might be taken over by AI assistants (such as the Artificial Intelligence Review Assistant, AIRA [33]), leaving to the reviewers the sole task of evaluating the content of a paper. Building smart contracts for peer reviews might accelerate this novel process of standardization.…”
Section: A New Community-Driven Standard? (mentioning; confidence: 99%)
“…This is difficult; some authors use independent peer review (e.g., Jarwal et al. (2009)), but of course peer review itself is a fraught metric (see Brezis and Birukou 2020; Krummel et al. 2019). We investigate test-of-time awards in the "An analysis of quantitative publication metrics" section; in some fields, retractions may also provide a useful signal.…”
Section: Quantitative Metrics as Proxy Indicators (mentioning; confidence: 99%)
“…Were we really protecting science by delaying a yes/no decision, or by rejecting solid but circumscribed work? This experience has built on previous attempts to limit reviewer demands and multiple resubmissions (Malhotra and Marder, 2015) and on a push to develop a reviewer compact that encourages our peers generally to approach manuscripts with the intention of seeing good data published (Krummel et al., 2019). With support and even explicit guidance from journal editors, many of us requested additions more parsimoniously.…”
Section: Refining Publication Practices Is Possible and Desirable (mentioning; confidence: 99%)