1989
DOI: 10.1111/j.1699-0463.1989.tb00464.x
Reproducibility of histomorphologic diagnoses with special reference to the kappa statistic

Abstract: Reproducibility of histomorphologic diagnoses with special reference to the kappa statistic. APMIS 97: 689-698, 1989. Systems for classification and grading used in pathology should ideally be biologically meaningful and at least be reproducible from one pathologist to another. A statistical method to evaluate reproducibility (non-chance agreement) for several observers using nominal or ordinal categories has been developed and refined over the past few decades: the kappa statistic. A hi…
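The abstract describes kappa as a measure of non-chance agreement between observers. As a rough sketch of the idea (not taken from the paper itself), Cohen's kappa for two raters compares the observed agreement with the agreement expected from the raters' marginal frequencies; the grades and data below are purely illustrative.

```python
# Minimal sketch of Cohen's kappa for two raters; the grade labels and
# example ratings are illustrative, not data from the cited paper.
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    n = len(ratings_a)
    # Observed agreement: fraction of cases where both raters agree.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement expected from each rater's marginal frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Example: two pathologists grading 10 biopsies into three grades.
rater1 = ["I", "I", "II", "II", "III", "I", "II", "III", "II", "I"]
rater2 = ["I", "II", "II", "II", "III", "I", "I", "III", "II", "I"]
print(round(cohen_kappa(rater1, rater2), 2))  # 0.69
```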

Cited by 411 publications (229 citation statements)
References 16 publications (3 reference statements)
“…The present evaluation documented a substantial intraobserver reproducibility, κ = 0.62, and a moderate interobserver reproducibility, κ = 0.59, representing an acceptable reproducibility (Landis and Koch, 1977; Svanholm et al, 1989). Other workers have previously reported substantial intraobserver reproducibility, κ = 0.66, by using only two grades (Fox et al, 1997).…”
Section: Reproducibility of Vascular Grading (supporting)
confidence: 72%
“…Interobserver agreement between readers 1 and 2 was assessed by a weighted κ test. The κ values were considered slight for κ ≤ 0.2, fair for κ = 0.21-0.4, moderate for κ = 0.41-0.6, substantial for κ = 0.61-0.8, and almost perfect for κ = 0.81-1.00 (21). In addition, agreement between both readers was assessed using the Spearman correlation coefficient.…”
Section: Results (mentioning)
confidence: 99%
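The excerpt above uses a weighted κ for ordinal grades, which penalizes disagreements more heavily the further apart the two readers' grades are. A sketch of one common form is below; the linear weighting scheme is an assumption, since the excerpt only says "weighted", and the example readers and grades are illustrative.

```python
# Sketch of a weighted kappa for ordinal grades coded as consecutive
# integers; the linear/quadratic weighting choice is an assumption.
import numpy as np

def weighted_kappa(r1, r2, n_categories, scheme="linear"):
    # Observed contingency table of the two readers' grades, as proportions.
    obs = np.zeros((n_categories, n_categories))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()
    # Expected table under independence of the two readers.
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # Disagreement weights grow with the distance between grades.
    i, j = np.indices((n_categories, n_categories))
    w = np.abs(i - j) if scheme == "linear" else (i - j) ** 2
    return 1.0 - (w * obs).sum() / (w * exp).sum()

# Example: two readers grading 8 cases on a 0-2 ordinal scale.
reader1 = [0, 1, 2, 1, 0, 2, 1, 1]
reader2 = [0, 1, 1, 1, 0, 2, 2, 1]
print(round(weighted_kappa(reader1, reader2, 3), 2))  # 0.67
```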
“…The interobserver reproducibility of density score was analyzed using a generalized κ statistic. 12 The average density score of the 6 radiologists was categorized (≤1.50 as lucent, 1.51-2.50 as intermediate and >2.50 as dense) and used in further analysis.…”
Section: Assessment of Breast Density and Data Regarding Use of HRT (mentioning)
confidence: 99%
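The "generalized κ statistic" cited above refers to extensions of kappa to more than two observers; Fleiss' kappa is one common form and is sketched below under that assumption. The 6-radiologist, 3-category rating table is illustrative, not data from the cited study.

```python
# Sketch of Fleiss' kappa, one common generalization of kappa to more
# than two observers; the rating table below is illustrative only.
import numpy as np

def fleiss_kappa(counts):
    """counts[i, j] = number of raters who put subject i into category j."""
    counts = np.asarray(counts, dtype=float)
    n_subjects, _ = counts.shape
    n_raters = counts[0].sum()  # assumes every subject is rated by all raters
    # Per-subject agreement among the raters.
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    # Chance agreement from the overall category proportions.
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)
    p_bar, p_e = p_i.mean(), np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)

# Example: 5 mammograms, 6 radiologists, categories lucent/intermediate/dense.
counts = np.array([
    [6, 0, 0],
    [4, 2, 0],
    [0, 5, 1],
    [0, 1, 5],
    [0, 0, 6],
])
print(round(fleiss_kappa(counts), 2))  # 0.64
```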