2002
DOI: 10.1302/0301-620x.84b7.0840950
Improved interobserver variation after training of doctors in the Neer system

Abstract: We investigated whether training doctors to classify proximal fractures of the humerus according to the Neer system could improve interobserver agreement. Fourteen doctors were randomised to two training sessions, or to no training, and asked to categorise 42 unselected pairs of plain radiographs of fractures of the proximal humerus according to the Neer system. The mean kappa difference between the training and control groups was 0.30 (95% CI 0.10 to 0.50, p = 0.006). In the training group the mean kappa valu…
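The agreement statistic reported in the abstract is Cohen's kappa, which corrects observed rater agreement for the agreement expected by chance. A minimal sketch of the computation, using invented toy ratings rather than data from the paper:

```python
# Hypothetical illustration of Cohen's kappa for two raters; the category
# labels and ratings below are made up and not taken from the study.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters classifying the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters classified identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal category frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Two raters assigning Neer-style categories to six radiographs (toy data):
a = ["1-part", "2-part", "3-part", "2-part", "4-part", "1-part"]
b = ["1-part", "2-part", "2-part", "2-part", "4-part", "1-part"]
print(round(cohens_kappa(a, b), 2))  # → 0.76
```

A kappa of 0 means agreement no better than chance and 1 means perfect agreement, so the 0.30 mean difference between trained and untrained groups reported above is a substantial shift on this scale.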

Cited by 17 publications (4 citation statements); references 0 publications.
“…Other authors 6,9,13,14 have shown that intraobserver reliability, levels of clinical expertise, the number of fracture categories, and pretraining of observers may also impact on the validity of a classification, but not to the extent of changing the overall superior results revealed by our kappa analysis. Clinical implications.…”
Section: Discussion (citation type: mentioning; confidence: 74%)
“…[6][7][8][9][10][11][12][13][14] There are other classifications 8,[15][16][17] but they are cumbersome, inaccurate, and for the most part, do not contain any underlying concept of the mechanism of injury which may be helpful in understanding the disparate patterns of bony displacement which characterise this injury.…”
(citation type: mentioning; confidence: 99%)
“…The literature regarding inter-observer agreement of fracture classifications appears to converge on the following paradigm: low inter-observer agreement is largely caused by compromised interpretation of the imaging, which is caused by imprecise measurements of the pathoanatomy. Moreover, Neer described patient, procedural, and clinical variability as causes of these imprecise measurements [ 22 ], and a few studies have improved precision through education [ 6 , 8 , 16 ]. Important to remember is that "the 4-segment classification is not a radiographic system but is a pathoanatomic classification of fracture displacement [ 22 ]."…”
Section: Discussion (citation type: mentioning; confidence: 99%)
“…Previous imaging studies in other areas found that training resulted in marked improvement in interobserver agreement (Bankier et al., 1999; Berg et al., 2002; Brorson et al., 2002; Lujan et al., 2008; Magnan and Maklebust, 2009; Patel et al., 2007).…”
Section: Introduction (citation type: mentioning; confidence: 96%)