2017
DOI: 10.1177/0961203317706558

Inter-observer variability of the histological classification of lupus glomerulonephritis in children

Abstract: The gold standard for the classification of lupus nephritis is renal histology, but reporting variation exists. The aim of this study was to assess the inter-observer variability of the 2003 International Society of Nephrology/Renal Pathology Society (ISN/RPS) lupus nephritis histological classification criteria in children. Histopathologists from a reference centre and three tertiary paediatric centres independently reviewed digitalized renal histology slides from 55 children with lupus nephritis. Histological…

Cited by 27 publications (26 citation statements) · References 28 publications (42 reference statements)
“…Indeed, any medical imaging technique involving human reporting is subject to significant interpretative subjectivity. This poor reproducibility has been repeatedly documented in the older kidney histopathology literature [26][27][28][29] and has recently been confirmed in studies applying deep learning to kidney pathology 1,2 . The indices of agreement between human pathologists vary with the metric used, the preparation of the specimen, the histological parameters assessed, and the kidney disease considered; overall, the reported agreement between human kidney pathologists is fair to moderate, with agreement ratios ranging between 0.3 and 0.6.…”
Section: Discussion
confidence: 80%
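The 0.3–0.6 agreement ratios quoted above are typically chance-corrected statistics such as Cohen's kappa. As a minimal sketch of how such a figure is computed, assuming scikit-learn is available and using made-up ISN/RPS class labels rather than data from the study:

```python
# Minimal sketch: agreement between two pathologists' ISN/RPS class calls.
# The labels below are hypothetical, invented for illustration only.
from sklearn.metrics import cohen_kappa_score

# Hypothetical ISN/RPS classes assigned to the same 10 biopsies
# by two independent observers.
rater_a = ["II", "III", "IV", "IV", "III", "V", "IV", "II", "III", "IV"]
rater_b = ["II", "IV", "IV", "III", "III", "V", "IV", "III", "III", "IV"]

# Raw (observed) agreement: the simple proportion of matching calls.
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Cohen's kappa corrects that proportion for chance agreement; values of
# roughly 0.2-0.4 are conventionally read as "fair" and 0.4-0.6 as "moderate".
kappa = cohen_kappa_score(rater_a, rater_b)

print(f"observed agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
```

Cohen's kappa compares only two observers at a time; Fleiss' kappa generalizes the same idea to several raters, as in the four-centre design of the study above.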
“…Thirdly, a single histological scoring system was used, and it is possible that using different scoring systems may have led to different results. Fourthly, pathology slides were not re‐read by a single histopathologist; it is possible that inter-pathologist interpretations, even amongst specialist renal pathologists, may vary sufficiently to mask an association between chronic changes and graft outcome. However, multiple histopathologists would be necessary to provide a ‘round-the-clock’ PIKB service, and therefore our analysis is pragmatic.…”
Section: Discussion
confidence: 99%
“…and visual quantification by pathologists are time-consuming and limited by poor intra- and inter-reader reproducibility. [4][5][6][7] The introduction of digital pathology in nephrology clinical trials 8 has provided an unprecedented opportunity to test machine learning approaches for large-scale tissue quantification efforts. Standardization of pathology material acquisition has allowed worldwide consortia to establish digital pathology repositories containing thousands of digital renal biopsies for the evaluation of kidney diseases in adults and children, across diverse populations and pathology laboratories.…”
Section: Translational Statement
confidence: 99%
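To make the notion of "tissue quantification" concrete, here is a deliberately simple, hypothetical sketch: estimating a stained-area fraction from a digital biopsy patch by intensity thresholding. Real pipelines use trained segmentation models; the threshold and synthetic patch below are illustrative assumptions, not any consortium's actual method.

```python
# Toy illustration of automated tissue quantification on a digital
# biopsy patch: estimate the stained-area fraction by thresholding.
import numpy as np

def stained_area_fraction(patch_rgb: np.ndarray, threshold: float = 100.0) -> float:
    """Fraction of pixels whose mean RGB intensity falls below `threshold`.

    Darker pixels are treated as stained tissue, brighter ones as
    background; `threshold` is an arbitrary illustrative value.
    """
    intensity = patch_rgb.astype(float).mean(axis=-1)  # per-pixel brightness
    return float((intensity < threshold).mean())

# Synthetic 256x256 patch: bright background with one dark "stained" region.
patch = np.full((256, 256, 3), 220, dtype=np.uint8)
patch[64:128, 64:192] = 60  # simulated stained region

print(f"stained-area fraction = {stained_area_fraction(patch):.3f}")
```

Because such a measurement is deterministic for a given slide and model, it sidesteps the inter-reader variability discussed above, which is precisely why the passage frames digital repositories as an opportunity for large-scale, reproducible quantification.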