2004
DOI: 10.1177/0165551504041673
The Reliability of Peer Reviews of Papers on Information Systems

Abstract: This paper analyses the reliability of the double-blind peer review systems used for submissions to the 2001 and 2002 UK Academy for Information Systems (UKAIS) conferences. The level of reliability found in the first conference was marginally lower than would be expected from a model based on chance. In the second conference the reliability level was significantly better, but still low. The paper explores some of the implications of this for the reviewing system, and suggests a model for assessing the impact …
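The reliability analysis in the paper, and the kappa values quoted by the citing papers below, rest on Cohen's kappa, which measures how far two reviewers' observed agreement exceeds the agreement expected by chance alone. A minimal sketch of the calculation, using hypothetical accept/reject decisions rather than the actual UKAIS data:

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters judging the same set of items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed proportion of items on which the two raters agree.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical accept/reject decisions from two reviewers on ten submissions.
reviewer_1 = ["accept", "accept", "reject", "accept", "reject",
              "reject", "accept", "reject", "accept", "reject"]
reviewer_2 = ["accept", "reject", "reject", "accept", "reject",
              "accept", "accept", "reject", "reject", "reject"]
print(f"kappa = {cohen_kappa(reviewer_1, reviewer_2):.2f}")  # 0.40 for this toy data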

Cited by 22 publications (22 citation statements)
References 17 publications
“…Values of Kappa between 0 and 0.2 indicate only slight agreement. The only paper we found in the field of information science [24] also reported low levels of reliability in two conferences: one conference had kappa = -0.04, the other had kappa = 0.30.…”
Section: Related Research (mentioning)
confidence: 77%
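The "slight agreement" reading quoted above follows the widely used Landis and Koch (1977) bands for kappa. A small sketch applying those bands to the two conference values reported in the cited paper:

```python
def interpret_kappa(kappa):
    """Landis & Koch (1977) interpretation bands for Cohen's kappa."""
    if kappa < 0.00:
        return "poor (below chance)"
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"

# The two UKAIS conference values reported in the citing paper:
for k in (-0.04, 0.30):
    print(f"kappa = {k:+.2f} -> {interpret_kappa(k)}")
```

The output ("poor" for the first conference, "fair" for the second) matches the abstract's summary: agreement marginally below chance in 2001, significantly better but still low in 2002.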
“…However, the majority of studies have looked at peer review of journal or conference papers (see, for example, [24], [20]) or the extent to which reviewers agree on whether to accept or reject research grant applications or research fellowships (see, for example, [16]). …”
Section: Related Research (mentioning)
confidence: 99%
“…In developing our rating procedures, we look for guidance from the literature examining peer review of manuscripts (see, e.g., Strayhorn, McDermott, and Tanguay 1993, van Rooyen, Black, and Godlee 1999, Wood, Roberts and Howell 2004). In our study, raters are expected to provide objective assessments of the quality of the answers.…”
Section: Analysis and Results (mentioning)
confidence: 99%
“…Values of Kappa between 0 and 0.2 indicate only slight agreement. The only paper addressing peer review that we found in the field of information science [45] also reported low levels of reliability in reviewing performed for two conferences: one conference had Kappa = -0.04, the other had Kappa = 0.30.…”
Section: Related Research (mentioning)
confidence: 82%