Interrater agreement for journal manuscript reviews (1984)
DOI: 10.1037/0003-066x.39.1.22

Abstract: Interrater agreement for journal manuscript reviews has often seemed unacceptably low when assessed using techniques such as the intraclass correlation, which depend on comparing error variance with the variance due to manuscripts. Such approaches are misleading because a high agreement coefficient depends on a large variance component for manuscripts. Most journals in the social sciences have very high rejection rates, reflecting a preponderance of poor-quality manuscripts and leading to relatively low manuscr…
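The abstract's argument is straightforward to demonstrate numerically: a one-way intraclass correlation is the share of total rating variance attributable to manuscripts, so holding rater error constant while narrowing the quality range of submissions drives the coefficient toward zero even though the raters have not changed. The following is a minimal simulation sketch of that effect in Python; the variance values, sample sizes, and function names are illustrative assumptions, not figures from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def icc_oneway(scores):
    """ICC(1): one-way random-effects intraclass correlation, estimated
    from the ANOVA mean squares of an (n_targets, k_raters) array."""
    n, k = scores.shape
    row_means = scores.mean(axis=1)
    ms_between = k * ((row_means - scores.mean()) ** 2).sum() / (n - 1)
    ms_within = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def simulate(sd_manuscripts, sd_error, n=500, k=2):
    """Ratings = true manuscript quality + independent rater error."""
    quality = rng.normal(0.0, sd_manuscripts, size=(n, 1))
    return quality + rng.normal(0.0, sd_error, size=(n, k))

# Identical rater error in both pools; only manuscript heterogeneity differs.
print(f"heterogeneous pool: ICC = {icc_oneway(simulate(2.0, 1.0)):.2f}")  # ~0.80
print(f"homogeneous pool:   ICC = {icc_oneway(simulate(0.5, 1.0)):.2f}")  # ~0.20
```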

Citation Types: 3 supporting, 95 mentioning, 0 contrasting
Year Published: 1986–2019

Cited by 178 publications (98 citation statements)
References 19 publications
“…Unfortunately, the ICC is dependent on the distribution of the couples' responses to the MCRPR. Finn's r (Finn, 1970) is an agreement index similar to the ICC that does not have this undesirable property and is recommended in cases in which the distribution of scores may not be normally distributed (Whitehurst, 1984). In this sample, the mean index was .52 (SD = .28).…”
Section: Methods (mentioning)
confidence: 85%
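For context on the excerpt above: Finn's r compares the observed within-target rater variance against the variance expected if raters responded uniformly at random across the rating scale, so it stays informative when targets are rated homogeneously and an ICC would have little between-target variance to work with. Below is a minimal sketch of the computation; the function name finn_r and the example ratings are illustrative, not taken from the cited studies.

```python
import numpy as np

def finn_r(scores, n_categories):
    """Finn's r (Finn, 1970): 1 minus the ratio of observed within-target
    rater variance to the variance expected under uniformly random use of
    an integer rating scale with `n_categories` points."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    row_means = scores.mean(axis=1, keepdims=True)
    ms_within = ((scores - row_means) ** 2).sum() / (n * (k - 1))
    chance_var = (n_categories ** 2 - 1) / 12.0  # variance of a discrete uniform
    return 1.0 - ms_within / chance_var

# Two raters, five targets, 7-point scale. Agreement is near perfect, yet
# the targets barely differ from one another, which would starve an ICC of
# between-target variance.
ratings = [[6, 6], [6, 7], [6, 6], [7, 7], [6, 6]]
print(f"Finn's r = {finn_r(ratings, n_categories=7):.2f}")  # ~0.97
```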
“…Many manuscripts are rejected on this criterion, even if the reviewers identify the research as sound and reported effectively. Despite evidence of the unreliability of the review process for evaluation and identifying importance (Bornmann, Mutz, & Daniel, 2010; Cicchetti, 1991; Gottfredson, 1978; Marsh & Ball, 1989; Marsh, Jayasinghe, & Bond, 2008; Peters & Ceci, 1982; Petty, Fleming, & Fabrigar, 1999; Whitehurst, 1984), this is a reasonable criterion given that journals have limited space and desires to be prestigious outlets. However, in the digital age, page limits are an anachronism (Nosek & Bar-Anan, 2012).…”
Section: Journals With Peer Review Standards Focused On the Soundness (mentioning)
confidence: 99%
“…The Finn coefficient is recommended when variance between raters is low (Finn 1970). Whitehurst (1984) suggests Finn as an alternative to problems with Kappa, and affirms that it is the most reasonable index for agreement.…”
Section: Analysis Procedures (mentioning)
confidence: 99%
“…We adopted Finn coefficient because of problems identified in Kappa coefficient, by data analysis researchers (Whitehurst 1984; Feinstein and Cicchetti 1990; Gwet 2002; Powers 2012). The Kappa test is done in two phases.…”
Section: Analysis Procedures (mentioning)
confidence: 99%
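The "problems with Kappa" these excerpts point to include the paradox described by Feinstein and Cicchetti (1990): when one category dominates, chance-expected agreement is high and kappa can be near zero or even negative despite very high raw agreement. A minimal sketch of the effect follows; the confusion matrix is an invented illustration, not data from the cited papers.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix whose rows are
    rater A's categories and columns are rater B's categories."""
    m = np.asarray(confusion, dtype=float)
    total = m.sum()
    p_observed = np.trace(m) / total                       # raw agreement
    p_chance = m.sum(axis=1) @ m.sum(axis=0) / total ** 2  # chance agreement
    return (p_observed - p_chance) / (1.0 - p_chance)

# Hypothetical data: the raters agree on 94 of 100 cases, but almost all
# mass sits in one category, so chance agreement is ~0.94 and kappa turns
# slightly negative despite 94% raw agreement (the "kappa paradox").
skewed = [[94, 3],
          [3, 0]]
m = np.asarray(skewed)
print(f"raw agreement = {np.trace(m) / m.sum():.2f}")  # 0.94
print(f"Cohen's kappa = {cohens_kappa(skewed):.2f}")   # ~-0.03
```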