1995
DOI: 10.1007/bf02602402

Measuring attending physician performance in a general medicine outpatient clinic

Abstract: Although this evaluation instrument for measuring clinic attending performance must be considered preliminary, this study suggests that relatively few attending evaluations are required to reliably profile an individual attending's performance, that attending identity is associated with a large amount of the scale score variation, and that special issues of attending performance more relevant to the outpatient setting than the inpatient setting (availability in clinic and sensitivity to time efficiency) should…

Cited by 25 publications (22 citation statements)
References 11 publications
“…Cardiologists utilise a deeper fund of knowledge than general internists, and specialists may be perceived by residents as being more knowledgeable than generalists, possibly explaining why cardiologists performed equally well on the item 'Demonstrated a broad fund of knowledge'. While teaching medical students and residents, we have noticed that a preponderance of classical physical findings derives from the cardiovascular examination, perhaps explaining why cardiologists performed … Although numerous studies of clinical teaching assessment instruments have examined assessment scores using factor analysis, [6][7][8][9]11,[13][14][15][16]19,23,27,28 we are not aware of studies of learner-on-faculty assessments investigating the stability of factors from 1 medical specialty to the next. We are aware, however, of a study showing that assessment items completed by first through fourth year surgical residents reduce to a single-factor model, whereas the same items completed by fifth year surgical residents reduce to a 2-factor model with subscales labelled 'ability' and 'interpersonal skills'.…”
Section: Discussion
confidence: 99%
“…7,8 Numerous studies have evaluated the psychometric characteristics of clinical teaching assessments. These studies included ratings of faculty by students, [8][9][10][11][12][13][14][15][16][17] residents 7,18-20 and peers 21,22 in diverse educational settings, such as internal medicine, 19,[21][22][23][24] family medicine, 14,18,25 emergency medicine, 26 gynaecology 13 and surgery. 10,27,28 However, our review of the literature 5 revealed that although factor analytic studies of clinical teaching assessments abound, [7][8][9]11,[13][14][15][16]19,23,27,28 none have described the factorial stability of teaching assessment scores across different medical specialties.…”
Section: Introduction
confidence: 99%
“…Consequently, the Stanford educational categories were not often modeled to an outstanding degree, which may have decreased the likelihood that evaluators would consistently agree on ratings. Finally, the estimate of appropriate sample size for this study was based on prior studies, one of which utilized resident evaluations (Hayward et al, 1995).…”
Section: Discussion
confidence: 99%
“…Our survey instrument has not been previously validated or examined for reliability in other groups. However, it is similar in design to others used to measure residents' attitudes about educational experiences, 17,18 and our feedback questions were specifically designed to address aspects of feedback that other authors had shown to be important. 3,4,6 Lastly, our residents may be atypical in their responses to feedback, and residents in other programs may react differently.…”
Section: Discussion
confidence: 99%