1982
DOI: 10.1111/j.1365-2559.1982.tb02752.x
An improved method of analysis of observer variation between pathologists

Abstract: An improved method of analysing interobserver variation in histopathological studies is described and illustrated using data from a congruence survey of malignant melanoma. The method provides, between any number of pathologists, an assessment of overall agreement and of agreement on each individual category of a classification system. Adjustment for differences in chance agreement due to varying numbers of categories or an altered composition of cases is included in the analysis. A generalization of the …

Cited by 60 publications (23 citation statements)
References 21 publications (9 reference statements)
“…Reproducibility was expressed either by the crude rate of agreement or by the kappa (k) statistics [16]. Statistical comparisons in agreement between classifications were made using a paired t-test of differences in k statistics.…”
Section: Methods (mentioning; confidence: 99%)
“…This provides a numerical measure of chance-corrected agreement [14,15]. The formula is as follows: k = (Po − Pe)/(1 − Pe), where Po is the crude proportion of agreement and Pe is the chance-explained proportion of agreement.…”
Section: Methods (mentioning; confidence: 99%)
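To make the quoted formula concrete, here is a minimal sketch in Python of computing k from two raters' category assignments. The rater lists and category labels are hypothetical, and this is the standard two-rater (Cohen) kappa rather than the multi-rater generalization the 1982 paper itself proposes.

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement: k = (Po - Pe) / (1 - Pe)."""
    n = len(ratings_a)
    # Po: crude proportion of cases on which the two raters agree.
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Pe: agreement expected by chance, from the raters' marginal
    # category frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    pe = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical diagnoses from two pathologists on ten cases.
rater1 = ["benign", "malignant", "benign", "benign", "malignant",
          "benign", "malignant", "benign", "benign", "malignant"]
rater2 = ["benign", "malignant", "benign", "malignant", "malignant",
          "benign", "benign", "benign", "benign", "malignant"]
print(f"kappa = {cohen_kappa(rater1, rater2):.3f}")
```

Here Po = 0.8 and Pe = 0.52, giving k ≈ 0.58: the raters agree well beyond what their marginal frequencies alone would produce.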
“…Since it is a ratio between the chance-corrected observed agreement and the chance-corrected perfect agreement, k approaches +1 for perfect agreement and -1 for complete disagreement; for a completely chance-explained agreement, k equals 0. Statistical significance of k values is tested by dividing each by its standard error, the ratio being distributed as a standard normal value Z [15]. A χ² analysis was performed to estimate homogeneity in the attribution of the diagnostic categories by the raters. Table 2 shows the distribution of the diagnostic categories among the raters as well as the original diagnoses made in the hospital.…”
Section: Methods (mentioning; confidence: 99%)
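The Z test quoted above can be sketched as follows. The standard-error formula used here is Cohen's simple large-sample approximation under the null hypothesis (k = 0), which may differ from the exact expression in the cited reference [15]; the Po, Pe, and n values are invented for illustration.

```python
import math

def kappa_z_test(po, pe, n):
    """Return (kappa, z) for crude agreement po, chance agreement pe,
    and n rated cases."""
    kappa = (po - pe) / (1 - pe)
    # Approximate SE of kappa under H0: kappa = 0 (Cohen, 1960).
    se0 = math.sqrt(pe / (n * (1 - pe)))
    return kappa, kappa / se0

# Hypothetical survey: 100 cases, 80% crude agreement, 55% expected by chance.
k, z = kappa_z_test(po=0.80, pe=0.55, n=100)
print(f"kappa = {k:.2f}, Z = {z:.2f}")  # Z > 1.96 => significant at the 5% level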
“…[7] An analysis of exchangeability (related to the probability that a second observer might allocate a case to a particular diagnostic category when the first has indicated a different one) between diagnostic categories was also performed (Table II); for this study a negative kappa meant low interchangeability (a high consistency) of the relevant categories.…”
Section: Discussion (mentioning; confidence: 99%)