1986
DOI: 10.1177/00220345860650020701

Percent Agreement, Pearson's Correlation, and Kappa as Measures of Inter-examiner Reliability

Abstract: Percent agreement and Pearson's correlation coefficient are frequently used to represent inter-examiner reliability, but these measures can be misleading. The use of percent agreement to measure inter-examiner agreement should be discouraged, because it does not take into account the agreement due solely to chance. Caution must be used in the interpretation of Pearson's correlation, because it is unaffected by the presence of any systematic biases. Analyses of data from a reliability study show that even though…
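To make the abstract's two cautions concrete, here is a minimal Python sketch with invented ratings (not data from this study; all names are hypothetical). It shows that Pearson's correlation stays perfect under a constant systematic bias between examiners, and that raw percent agreement can look respectable while kappa, which subtracts chance agreement, sits at or below zero.

    # Hypothetical two-examiner data, invented for illustration only.
    import statistics

    rater_a = [0, 1, 2, 3, 4, 0, 1, 2, 3, 4]   # 0-4 ordinal scores
    rater_b = [x + 1 for x in rater_a]          # systematic bias: B scores 1 higher

    def percent_agreement(a, b):
        # Fraction of items on which the examiners give identical scores.
        return sum(x == y for x, y in zip(a, b)) / len(a)

    def cohens_kappa(a, b):
        # Agreement beyond chance: (p_o - p_e) / (1 - p_e).
        n = len(a)
        p_o = percent_agreement(a, b)
        # Chance agreement expected from each examiner's marginal frequencies.
        p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in set(a) | set(b))
        return (p_o - p_e) / (1 - p_e)

    def pearson_r(a, b):
        ma, mb = statistics.fmean(a), statistics.fmean(b)
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        var_a = sum((x - ma) ** 2 for x in a)
        var_b = sum((y - mb) ** 2 for y in b)
        return cov / (var_a * var_b) ** 0.5

    print(percent_agreement(rater_a, rater_b))  # 0.0  -- they never agree exactly
    print(cohens_kappa(rater_a, rater_b))       # about -0.19 -- below chance
    print(pearson_r(rater_a, rater_b))          # 1.0  -- the bias is invisible to r

    # Chance inflation: raters who each call 90% of sites "sound" (0)
    # agree 80% of the time here, yet kappa is roughly zero.
    a2 = [0]*9 + [1]
    b2 = [0]*8 + [1, 0]
    print(percent_agreement(a2, b2))  # 0.8
    print(cohens_kappa(a2, b2))       # about -0.11

Because rater_b is simply rater_a plus a constant, the correlation is exactly 1.0 even though the two examiners never record the same score, which is the systematic-bias problem the abstract warns about.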

Cited by 235 publications (151 citation statements)
References 17 publications
Citing publications span 1991 to 2024
“…The inflammatory response of the negative control group was statistically significant (P > 0.05) when compared with Aloe vera or calcium hydroxide at all periods. The intraexaminer performance evaluated by kappa analysis resulted in 0.807, which is considered an excellent agreement with the literature (Hunt, 1986) and perfect agreement (Eklud et al, 1986).…”
Section: Group 3: Control Group (supporting)
confidence: 77%
“…1 Additionally, despite high percentage levels of agreement with some adventitious sounds being coupled with lower kappa values, the kappa values should be deemed representative of inter-rater agreement because this considers the influence of chance, whereas percentage agreement does not. 36 However, because kappa values are influenced by the prevalence of the factor being investigated, infrequent occurrences of clinical factors (eg, bronchial breathing) may generate low kappa values but not necessarily be representative of low overall agreement. 37 Despite scant evidence for its clinical benefit, palpable chest-wall fremitus is commonly used by practitioners 38 and has been established as a key clinical indicator of retained secretions throughout management of pulmonary dysfunction in adults receiving mechanical ventilation.…”
Section: Discussion (mentioning)
confidence: 99%
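The prevalence caveat in the passage above can be shown numerically. Below is a minimal sketch with invented binary findings (hypothetical data and names, not from any cited study): raw agreement is held at 90% in both cases, yet kappa drops sharply when the sign is rare.

    # Invented binary findings (1 = sign present, 0 = absent), illustration only.

    def kappa(a, b):
        # Cohen's kappa for binary ratings: (p_o - p_e) / (1 - p_e).
        n = len(a)
        p_o = sum(x == y for x, y in zip(a, b)) / n
        p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in (0, 1))
        return (p_o - p_e) / (1 - p_e)

    # Balanced prevalence: the sign is present in about half the patients.
    bal_a = [1]*45 + [0]*5 + [1]*5 + [0]*45
    bal_b = [1]*45 + [1]*5 + [0]*5 + [0]*45
    # Rare sign: only 10% positives, with the same 90% raw agreement.
    rare_a = [1]*5 + [0]*5 + [1]*5 + [0]*85
    rare_b = [1]*5 + [1]*5 + [0]*5 + [0]*85

    print(kappa(bal_a, bal_b))    # 0.80 at 90% raw agreement
    print(kappa(rare_a, rare_b))  # about 0.44 at the same 90% raw agreement

This matches the quoted caveat: an infrequent clinical factor can produce a low kappa without necessarily reflecting low overall agreement.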
“…Frequently used indices for inter-examiner agreement are the percentage agreement and Pearson's correlation coefficient. These indices may be misleading, and kappa statistics were therefore chosen (51). The kappa statistic is a measure of the proportion of agreement beyond chance which is actually achieved.…”
Section: Discussion (mentioning)
confidence: 99%
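For reference, the "proportion of agreement beyond chance" described in this passage is the standard Cohen's kappa,

    \kappa = \frac{p_o - p_e}{1 - p_e},

where p_o is the observed proportion of agreement between examiners and p_e is the proportion of agreement expected by chance from the examiners' marginal rating frequencies. Kappa is 0 when observed agreement is no better than chance and 1 under perfect agreement.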