1979
DOI: 10.1016/s0022-5347(17)56572-6
Lack of Agreement Between Subjective Ratings of Instructors and Objective Testing of Knowledge Acquisition in a Urological Continuing Medical Education Course

Abstract: Objective scores from multiple-choice questions administered before and after a postgraduate course were compared with subjective ratings of the instructors at a 3-day seminar. The objective mean scores after the course were significantly higher than the scores before the course (p < 0.0001). There was no correlation between test results and subjective ratings of instructors.
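To make the statistical comparison described in the abstract concrete, below is a minimal sketch (in Python, assuming SciPy is available) of the two analyses it implies: a paired pre/post comparison of multiple-choice scores and a correlation between knowledge gain and instructor ratings. The cohort size, score distributions, and 1-5 rating scale are invented for illustration; this is not the study's actual data or code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 48                            # hypothetical cohort size (not from the study)
pre = rng.normal(55, 10, n)       # simulated pre-course MCQ scores (percent)
post = pre + rng.normal(8, 6, n)  # simulated post-course scores with a genuine gain
ratings = rng.uniform(1, 5, n)    # simulated instructor ratings, independent of gain

# Paired t-test: did mean scores rise after the course?
t_stat, p_paired = stats.ttest_rel(post, pre)
print(f"paired t = {t_stat:.2f}, p = {p_paired:.4g}")

# Pearson correlation: does knowledge gain track the subjective ratings?
gain = post - pre
r, p_corr = stats.pearsonr(gain, ratings)
print(f"Pearson r = {r:.2f}, p = {p_corr:.3f}")
```

With data simulated this way, the paired test shows a clear pre-to-post gain while the gain-versus-rating correlation is near zero, mirroring the pattern of results the abstract reports.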

Cited by 14 publications (10 citation statements); references 2 publications. Citing publications range from 1982 to 2021.
“…This has previously been identified in the literature as well that experts know a qualified trainee when they see one, but that this is overly subjective in nature and they often cannot explain the ways by which they came to this conclusion. [145][146][147][148] While the task-specific competence outcome, like the task-specific scale (in the Likert scale assessment group) is procedure-specific and rigid, limiting its broad use.…”
Section: Discussion | Citation type: mentioning | Confidence: 99%
“…Traditionally, clinical skills have been evaluated by means of multiple-choice tests, oral examinations, ward assessment forms (in-training reports) and other forms of written examination. These methods have been shown to be lacking in either reliability or validity [16]. Urology residents are required to perform many nonoperative clinical tasks during their residency, and many of these required skills, both diagnostic and therapeutic, are relatively unsupervised by faculty members [16]. Faculty members often evaluate residents using subjective performance rating forms, i.e.…”
Section: Discussion | Citation type: mentioning | Confidence: 99%
“…Questioning residents and observing their physical examinations of ward or clinical patients is another method to evaluate them and allows supervising physicians to witness actual practice, but surgical decision-making may still (consciously or unconsciously on the part of the resident) follow that particular faculty's decision algorithm; time and interruptions may limit these opportunities; and the attending still holds the ultimate responsibility for making decisions. In addition, subjective faculty evaluations are unreliable 3,4 and tend to inflate resident performance. 5,6 The use of simulated patients (SP) and objective structured clinical examinations (OSCE) is an option that can be used to evaluate resident decision-making by allowing an actual "patient" encounter to be recorded and the actual practice of residents observed yet still allow the residents (all or in part) to feel free of direct observation and the inhibitions that may attend it, thus actively taking the role of primary decision-maker.…”
Section: Introduction | Citation type: mentioning | Confidence: 99%