2003
DOI: 10.1097/00000478-200306000-00012
International Variation in Histologic Grading Is Large, and Persistent Feedback Does Not Improve Reproducibility

Abstract: Histologic grading systems are used to guide diagnosis, therapy, and audit on an international basis. The reproducibility of grading systems is usually tested within small groups of pathologists who have previously worked or trained together. This may underestimate the international variation of scoring systems. We therefore evaluated the reproducibility of an established system, the Banff classification of renal allograft pathology, throughout Europe. We also sought to improve reproducibility by providing ind…

Cited by 197 publications (200 citation statements); references 12 publications.
"…The reported interobserver agreement for visual assessment of tubular atrophy and interstitial fibrosis using routine stains applied in nephropathology is quite variable. 7,35,36 The reproducible approach reported in the Oxford IgAN classification, 37 in which tubular atrophy and interstitial fibrosis are combined into one grading system, therefore seems most practical.…"
Section: Tubulointerstitial Lesions
confidence: 99%
"…Many biopsies are assigned the ambiguous "borderline" designation: In our recent study of 403 biopsies, 35 biopsies were diagnosed TCMR but 40 were called borderline (11). Interobserver agreement is poor (12,13), and certain diagnostic rules may be incorrect, for example the designation of all "isolated v-lesions" as TCMR (11,14). Central histology assessment is not the solution: It is simply a second opinion using the same standard.…"
Section: Introduction
confidence: 99%
"…This reflects the problem of "reference standard-related bias": a new test, whether based on a molecular classifier or the opinion of a second pathologist, will not agree perfectly with the existing gold standard (the opinion of a different pathologist) when there is a high degree of subjectivity involved in the assignment of the gold standard. "Noise" within histology is intrinsic, due to the poor kappa values for TCMR lesions, for example, tubulitis 0.17, where complete agreement is 1.0 and random is 0.0 (12,13). For the present, the clearest value of the TCMR score will be in problematic cases, and its limited ability to provide information about diagnoses such as recurrent diseases must be acknowledged.…"
confidence: 99%
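The kappa statistic quoted above (e.g. 0.17 for tubulitis, where 1.0 is complete agreement and 0.0 is chance level) is Cohen's kappa, which corrects raw agreement for the agreement two raters would reach by chance given their score distributions. A minimal sketch of the computation, using hypothetical rater scores for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same cases.

    Returns 1.0 for perfect agreement and approximately 0.0 when the
    raters agree no more often than chance would predict.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of cases scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal score frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[k] * counts_b.get(k, 0) for k in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical tubulitis scores (0-3) from two pathologists on 8 biopsies:
scores_a = [0, 1, 2, 0, 1, 3, 0, 2]
scores_b = [0, 2, 2, 1, 1, 3, 0, 0]
print(round(cohens_kappa(scores_a, scores_b), 3))
```

This makes concrete why "noise within histology is intrinsic": even moderate raw agreement can collapse to a low kappa once chance agreement on common scores is subtracted out.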
"…However, this is too simple an assumption in view of other investigations that report significant intra- and inter-observer variation among veterinary pathologists in grading canine cutaneous mast cell tumors (Northrup et al, 2005) or intestinal tissues (Willard et al, 2002). Neither is this problem unique to veterinary pathology; it is discussed in human pathology as well (Furness et al, 2003). Pathology and radiology, which rely heavily on visual interpretation, are the two fields of medicine with the lowest reproducibility of diagnosis (Berner and Graber, 2008).…"
Section: Introduction
confidence: 99%