1983
DOI: 10.1093/oxfordjournals.aje.a113633
Reliability of Ophthalmic Diagnoses in an Epidemiologic Survey

Abstract: In the Nepal Blindness Survey, 39,887 people in 105 sites were examined by 10 ophthalmologists from Nepal and four other countries during 1981. Ophthalmic protocols were pretested on approximately 3000 subjects; however, interobserver variability was inevitable. To quantify the amount of variability and assess the reliability of important ophthalmic measures, a study of interobserver agreement was conducted. Five ophthalmologists, randomly assigned to one of two examining stations in a single survey site, carr…

Cited by 14 publications (3 citation statements). References: 0 publications.
“…The substantial inter-observer variation among examiners reflects the subjective nature of the clinical exam (Table 2). The level of observer experience, 10,11 complexity of the grading scheme, 12 and possible diagnostic drift in individual observers 11 are factors that have influenced inter-observer agreement in past studies. However, the more objective PCR assay shows agreement between assays (κ = 0.98) of a different order than has ever been found with the clinical exam.…”
Section: Discussion
Citation type: mentioning; confidence: 99%
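The κ = 0.98 in the statement above is a two-rater agreement coefficient (Cohen's kappa). As a minimal sketch of how that statistic is computed — the ratings below are hypothetical, not data from the cited study:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters grading the same subjects."""
    rater_a, rater_b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(rater_a, rater_b)
    # Observed agreement: fraction of subjects both raters label identically.
    p_o = np.mean(rater_a == rater_b)
    # Chance agreement under independence, from each rater's marginal rates.
    p_e = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical gradings (1 = disease present, 0 = absent) for 10 subjects.
exam_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
exam_b = [1, 0, 0, 1, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(exam_a, exam_b), 2))  # 0.6
```

Kappa corrects raw percent agreement for the agreement expected by chance, which is why a clinical exam with high raw concordance can still yield a modest κ.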
“…[11][12][13] Equipment such as slit lamps has been used, as well as more basic equipment such as torches and ×2.5 loupes. 8,14 The availability of an accurate and cheap screening tool that could easily be used by non-ophthalmologists would potentially be of great use in cataract detection.…”
Section: Discussion
Citation type: mentioning; confidence: 99%
“…Finally, in 1971, Kappa was modified by Fleiss to allow the measurement of reproducibility in cases where several (more than two) observers judge each case. Over the last decades, several papers have appeared in the medical and psychological literature discussing Kappa indexes for measuring agreement between two or more raters (Brilliant, Lepowski, & Musch, 1983; Kraemer, 1992; Krummenauer, 2000; Little, Worthingham-Roberts, & Mann, 1984; Posner, Sampson, Caplan, Ward, & Cheney, 1990; Robert & McNaemee, 1998). However, the Fleiss index for multiple raters is unweighted (like the original Kappa presented by Cohen [1960]); hence, it treats all disagreements equally.…”
Section: Discussion
Citation type: mentioning; confidence: 99%
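As a companion sketch of the Fleiss extension the passage describes — an unweighted kappa for more than two raters — here is the standard formula over hypothetical count data (not drawn from any of the cited papers):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa. `counts` is an (N subjects x K categories) matrix where
    counts[i, j] is how many of the n raters put subject i in category j."""
    counts = np.asarray(counts, dtype=float)
    N, _ = counts.shape
    n = counts[0].sum()  # raters per subject (assumed constant)
    # Per-subject agreement: proportion of concordant rater pairs.
    P_i = (np.sum(counts ** 2, axis=1) - n) / (n * (n - 1))
    P_bar = P_i.mean()
    # Chance agreement from the pooled category proportions.
    p_j = counts.sum(axis=0) / (N * n)
    P_e = np.sum(p_j ** 2)
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical: 4 subjects, 3 categories, 5 raters per subject.
counts = [[5, 0, 0],
          [2, 3, 0],
          [0, 0, 5],
          [1, 1, 3]]
print(round(fleiss_kappa(counts), 2))  # 0.49
```

Because every off-diagonal disagreement counts the same here, an ordinal grading scheme (such as graded ophthalmic severity scales) is often better served by a weighted kappa, which is the limitation the quoted passage raises.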