Observer variation in skeletal radiology
Year: 1983
DOI: 10.1007/bf00360790

Abstract: The factors that affect observer variation in bone radiology are analysed from data in the literature and on the basis of studies carried out at McMaster University on the hands and sacroiliac joints. A plea is made for presenting results in terms of Kappa statistics so that agreement due purely to chance is eliminated. In the conclusions the main variables that affect concordance are listed so that strategies can be developed to reduce observer variation. This is important in serial studies to ensure that the…
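The abstract's plea concerns chance-corrected agreement. As a rough illustration of what Cohen's kappa does, here is a minimal Python sketch for two readers rating the same films as abnormal or normal; the 2x2 counts are invented for the example and are not taken from the paper or its citing studies.

    # Cohen's kappa: kappa = (p_o - p_e) / (1 - p_e), where p_o is the
    # observed agreement and p_e is the agreement expected by chance
    # from the readers' marginal totals.
    # Hypothetical 2x2 table: rows = reader A, columns = reader B.
    table = [[40, 10],   # A abnormal: B abnormal, B normal
             [ 5, 45]]   # A normal:   B abnormal, B normal

    n = sum(sum(row) for row in table)
    p_o = (table[0][0] + table[1][1]) / n                   # raw agreement
    row_marg = [sum(row) / n for row in table]              # reader A margins
    col_marg = [(table[0][c] + table[1][c]) / n for c in range(2)]  # reader B margins
    p_e = sum(row_marg[i] * col_marg[i] for i in range(2))  # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    print(round(kappa, 2))   # 0.7 for these made-up counts

Raw agreement here is 85%, but because half of that agreement is expected by chance alone, the chance-corrected kappa is only 0.7; this is the point of reporting kappa rather than percentage agreement.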

Cited by 29 publications, published 1984–2015 (11 citation statements)
References 31 publications

“…Consistency is mandatory if a scoring system is to be used for monitoring the progression of a disease and the effects of therapy. Since interobserver variation is generally greater than intraobserver variation, assessment of the former can be used as a test for diagnostic value (4,13).…”
Section: Grading Systems and Observer Variation (mentioning; confidence 99%)
“…Perception is influenced by the examination technique, psychologic factors and experience (10). In order to keep variation in the judgement of a feature low, it is essential to exclude vague or indefinite findings; features must be strictly defined to avoid semantic confusion (4,13). Grading of a feature can be facilitated if the different radiologic changes can be measured quantitatively.…”
Section: Grading Systems and Observer Variation (mentioning; confidence 99%)
“…Studies have also demonstrated that training can improve the performance of examiners. 38,39 We are convinced that our examiners demonstrated improvement in performance as evidenced by comparing the inter-rater reliability between the third (experienced) examiner and the 2 primary examiners for the first (κ_w = 0.37, 0.39) and the second sessions (κ_w = 0.57, 0.68). This may be explained by the additional training before the second session.…”
Section: Discussion (mentioning; confidence 71%)
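The κ_w values quoted above are weighted kappas, which give partial credit when two examiners' ordinal grades are close but not identical. A minimal sketch with linear weights, assuming a hypothetical four-grade scale and an invented cross-tabulation (not data from the cited study):

    import numpy as np

    # Hypothetical cross-tabulation of grades 0-3:
    # rows = examiner 1, columns = examiner 2.
    conf = np.array([[20,  5,  1,  0],
                     [ 4, 15,  6,  1],
                     [ 1,  5, 12,  4],
                     [ 0,  1,  3,  8]], dtype=float)

    p = conf / conf.sum()                       # joint proportions
    k = conf.shape[0]
    i, j = np.indices((k, k))
    w = 1 - np.abs(i - j) / (k - 1)             # linear agreement weights
    p_o = (w * p).sum()                         # weighted observed agreement
    p_e = (w * np.outer(p.sum(axis=1), p.sum(axis=0))).sum()  # weighted chance agreement
    kappa_w = (p_o - p_e) / (1 - p_e)
    print(round(kappa_w, 2))

Quadratic weights, 1 - ((i - j) / (k - 1))**2, are the other common choice; the cited study does not specify its weighting scheme here, so linear weights are only an assumption for this sketch.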
“…On the other hand, this can be the source of systematic errors [7]. Special training and greater professional experience can lead to greater diagnostic reliability [3]. The influence of the latter could not be excluded in our study.…”
Section: Discussion (mentioning; confidence 95%)