2010
DOI: 10.1371/journal.pone.0014331

A Reliability-Generalization Study of Journal Peer Reviews: A Multilevel Meta-Analysis of Inter-Rater Reliability and Its Determinants

Abstract: Background: This paper presents the first meta-analysis for the inter-rater reliability (IRR) of journal peer reviews. IRR is defined as the extent to which two or more independent reviews of the same scientific document agree. Methodology/Principal Findings: Altogether, 70 reliability coefficients (Cohen's Kappa, intra-class correlation [ICC], and Pearson product-moment correlation [r]) from 48 studies were taken into account in the meta-analysis. The studies were based on a total of 19,443 manuscripts; on average…
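The abstract names three agreement coefficients without showing how they are obtained. As a rough, self-contained illustration (not data or code from the study), the following Python sketch computes Cohen's kappa and Pearson's r for two hypothetical reviewers rating the same ten manuscripts on an invented 0–3 recommendation scale:

```python
# Illustrative only: hypothetical recommendations from two reviewers for the
# same ten manuscripts on a 4-point scale (0 = reject ... 3 = accept).
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr

reviewer_a = [3, 2, 0, 1, 2, 3, 1, 0, 2, 1]
reviewer_b = [2, 2, 0, 1, 3, 3, 0, 0, 2, 2]

# Cohen's kappa: chance-corrected agreement on categorical recommendations.
kappa = cohen_kappa_score(reviewer_a, reviewer_b)

# Pearson product-moment correlation: linear association of the two ratings.
r, _ = pearsonr(reviewer_a, reviewer_b)

print(f"Cohen's kappa = {kappa:.2f}, Pearson r = {r:.2f}")
```

An intra-class correlation would typically be computed with a dedicated reliability routine (e.g., from a mixed-model package) and is omitted here to keep the sketch minimal.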

Citations: Cited by 167 publications (137 citation statements)
References: 93 publications
“…Building on these statements, the finding of this study, low (κ) or reasonable (ICC) inter-rater reliability, should be interpreted not as an indication that the quality of the ACP peer review process is low but instead as a general characteristic of journal manuscript reviewing [10] that possibly has a positive effect on the predictive validity of manuscript selection decisions. In recent years, some ways to increase inter-rater reliability in journal peer review have been suggested, such as including a greater number of reviewers.…”
Section: Discussion (mentioning)
confidence: 79%
“…replication is under-emphasized, analysis and reporting standards can be lax (Simmons, Nelson, & Simonsohn, 2011), and conventional peer review is unreliable (Bornmann, Mutz, & Daniel, 2010). There's a tremendous incentive to develop tools and platforms that can help address such problems.…”
Section: Evaluating and Communicating Results (mentioning)
confidence: 99%
“…Since it is essentially the same researchers who publish and review, this further increases the workload. Second, the peer-review process per se has been increasingly debated, and we find a growing amount of studies showing problems related to the process, including bias and nepotism, as well as problems with interrater reliability, in the peer-based assessment of articles, research proposals, and evaluations of research institutions (e.g., Bornmann, 2011; Bornmann et al, 2013; Garcia et al, 2015; Lee et al, 2013; Wennerås & Wold, 1997). Not only open access, but also subscription-based journals have been forced to retract large numbers of articles (Steen et al, 2013; …). To trace the reactions and comments on Bohannon's article, the Open Access Tracking Project Primary (OATP Primary) was used to identify web documents, which reported and commented on Bohannon's article.…”
Section: Peer Review and The Recent Changes In Scholarly Publishing (mentioning)
confidence: 99%