1996
DOI: 10.1136/jcp.49.10.833

Interobserver variation in the reporting of cervical colposcopic biopsy specimens: comparison of grading systems.

Abstract: Aims: To assess interobserver variation in reporting cervical colposcopic biopsy specimens and to determine whether a modified Bethesda grading system results in better interobserver agreement than the traditional cervical intraepithelial neoplasia (CIN) grading system. Methods: One hundred and twenty five consecutive cervical colposcopic biopsy specimens were assessed independently by six histopathologists. Specimens were classified using the traditional CIN grading system as normal, koilocytosis, CIN I, CIN II…

Cited by 102 publications (63 citation statements)
References 8 publications
“…We show moderate interobserver diagnostic reproducibility by a mixed group of 19 pathologists evaluating HPV-related lesions of the cervix. Our results are in accordance with previously reported CIN diagnosis interobserver reproducibilities ranging from poor to good (kappa 0.23-0.64) (2, 3, 7-20), and likewise CIN 2 has the lowest interobserver diagnostic reproducibility (1-4, 7-11, 13-15, 21-23). Some pathologists who report results in the CIN system have reduced use of the CIN 2 diagnostic category to such a low frequency that in their hands it becomes a de facto two-class system.…”
Section: Interobserver Reproducibility of Patient Management Among Gy… (supporting)
confidence: 82%
“…37 A value >0.75 indicates excellent agreement, 0.4 to 0.75 indicates fair to good agreement, and <0.4 indicates poor agreement. 38…”
Section: Item Reliability (mentioning)
confidence: 99%
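For readers less familiar with the kappa statistic quoted throughout these statements, here is a minimal Python sketch of Cohen's kappa for two raters, applying the same agreement bands as the statement above. The specimen gradings are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters grading the same specimens.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e the agreement expected by chance.
    """
    n = len(rater_a)
    assert n == len(rater_b) and n > 0
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

def interpret(kappa):
    """Bands quoted in the citation statement above."""
    if kappa > 0.75:
        return "excellent"
    if kappa >= 0.4:
        return "fair to good"
    return "poor"

# Hypothetical gradings of ten biopsy specimens by two pathologists.
a = ["CIN I", "CIN II", "normal", "CIN III", "CIN I",
     "CIN II", "normal", "CIN I", "CIN III", "CIN II"]
b = ["CIN I", "CIN I", "normal", "CIN III", "CIN II",
     "CIN II", "normal", "CIN I", "CIN III", "CIN II"]
k = cohens_kappa(a, b)
print(f"kappa = {k:.2f} ({interpret(k)})")  # kappa = 0.73 (fair to good)
```

Note that kappa corrects raw percentage agreement for chance: the two raters above agree on 8 of 10 specimens (80%), yet the chance-corrected kappa is only about 0.73.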
“…As expected, classification agreement with lower variability between observers can be improved in a 2-tiered versus a 3-tiered system [10, 28, 78-87]. Improved agreement among pathologists leads to a more consistent and reproducible diagnosis, which may lead to more valid clinical outcome data.…”
Section: WG2 Recommendation No… (mentioning)
confidence: 99%
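To make the 2-tiered versus 3-tiered point concrete, the sketch below (gradings and the low/high-grade mapping are invented for illustration; it uses scikit-learn's cohen_kappa_score) shows how collapsing CIN II and CIN III into a single high-grade category raises agreement when raters disagree mainly between adjacent high grades.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical gradings of eight specimens by two pathologists; the only
# disagreements are between the adjacent grades CIN II and CIN III.
rater_a = ["CIN I", "CIN II",  "CIN II", "CIN III",
           "CIN I", "CIN III", "CIN II", "CIN I"]
rater_b = ["CIN I", "CIN III", "CIN II", "CIN II",
           "CIN I", "CIN III", "CIN III", "CIN I"]

# Collapse the 3-tier CIN grades into a 2-tier low/high-grade scheme.
TWO_TIER = {"CIN I": "low-grade", "CIN II": "high-grade", "CIN III": "high-grade"}

k3 = cohen_kappa_score(rater_a, rater_b)
k2 = cohen_kappa_score([TWO_TIER[g] for g in rater_a],
                       [TWO_TIER[g] for g in rater_b])
print(f"3-tier kappa: {k3:.2f}, 2-tier kappa: {k2:.2f}")
# 3-tier kappa: 0.44, 2-tier kappa: 1.00
```

In this constructed example every CIN II/CIN III disagreement disappears under the 2-tier mapping, so kappa rises from fair-to-good to perfect; real data would show a smaller but analogous gain.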