The Reported Validity and Reliability of Methods for Evaluating Continuing Medical Education: A Systematic Review
2008
DOI: 10.1097/acm.0b013e3181637925

Abstract: The evidence for CME effectiveness is limited by weaknesses in the reported validity and reliability of evaluation methods. Educators should devote more attention to developing and reporting high-quality CME evaluation methods and to emerging guidelines for establishing the validity of CME evaluation methods.

Cited by 56 publications (37 citation statements); references 51 publications.
“…On the other hand, teaching in clinical settings usually involves smaller groups of learners and a greater opportunity for group-based discussion, which, in turn, is enhanced by teachers who are approachable and by minimal psychological size differentials between teachers and learners. Systematic reviews of CME program evaluations have demonstrated the widespread use of methods with poor validity evidence, 26,27 a focus on learner outcomes, 28 and neglect of the higher Kirkpatrick outcomes of behavior and results. 27 Our study addresses these limitations by providing a measure of feedback for CME presenters that is based on observed behaviors and is supported by strong validity evidence.…”
Section: Discussion
confidence: 99%
“…16,17 Systematic reviews have documented that validity evidence is infrequently reported. Estimates vary widely depending on the sample, but content, internal structure, and relations with other variables evidence are typically reported in <40% of studies, 6,9,13,18–21 and response process and consequences are reported in <10%. 18 However, the reporting of validity evidence for patient outcomes is unknown.…”
Section: Methodological Issues in Patient Outcomes Studies
confidence: 99%
“…22,87,88 The reporting of validity evidence in this sample was even less frequent than in previous reviews in medical education. 6,9,13,18–21 We discuss this below. Our findings regarding the prevalence of unit of analysis error mirror those in clinical medicine.…”
Section: Integration With Other Literature
confidence: 99%
“…We used the same definitions for sources of reliability and validity evidence that were used in a prior medical education review. 8 Evidence sources for reliability included test-retest reliability, such as reporting a correlation coefficient of scores from tests taken twice over a period of time by learners; a coefficient for internal consistency, such as the Cronbach alpha; and inter-rater reliability, such as intraclass correlation. Validity evidence required descriptions and intended purpose of the evaluation method and statistical or psychometric testing of learners from at least 1 evidence source among content, construct, or criterion validity.…”
Section: Methods
confidence: 99%