"Statistical significance is the least interesting thing about the results. You should describe the results in terms of measures of magnitude, not just, does a treatment affect people, but how much does it affect them." (Gene V. Glass)1

"The primary product of a research inquiry is one or more measures of effect size, not P values." (Jacob Cohen)2

These statements about the importance of effect sizes were made by two of the most influential statistician-researchers of the past half-century. Yet many submissions to the Journal of Graduate Medical Education omit mention of the effect size in quantitative studies while prominently displaying the P value. In this paper, we target readers with little or no statistical background, to encourage you to improve your comprehension of the relevance of effect size for planning, analyzing, reporting, and understanding education research studies.
What Is Effect Size?

In medical education research studies that compare different educational interventions, effect size is the magnitude of the difference between groups. The absolute effect size is the difference between the average, or mean, outcomes of two different intervention groups. For example, if an educational intervention improved subjects' examination scores by an average of 15 of 50 questions compared with another intervention, the absolute effect size is 15 questions, or 3 grade levels (30%), on the examination. Absolute effect size does not take into account the variability in scores, in that not every subject achieved the average outcome.

In another example, residents' self-assessed confidence in performing a procedure improved an average of 0.4 point on a Likert-type scale ranging from 1 to 5 after simulation training. While the absolute effect size in the first example appears clear, the effect size in the second example is less apparent. Is a 0.4-point change large or trivial? Accounting for variability in the measured improvement may aid in interpreting the magnitude of the change in the second example.

Thus, effect size can refer to the raw difference between group means (absolute effect size) or to standardized measures of effect, which are calculated to transform the effect to an easily understood scale. Absolute effect size is useful when the variables under study have intrinsic meaning (eg, number of hours of sleep). Calculated indices of effect size are useful when the measurements have no intrinsic meaning, such as numbers on a Likert scale; when studies have used different scales, so no direct comparison is possible; or when effect size is examined in the context of variability in the population under study. Calculated effect sizes can also be used to compare results quantitatively across different studies, and thus are commonly used in meta-analyses.
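The distinction between an absolute effect size and a standardized one can be made concrete with a short calculation. The sketch below, using hypothetical score data (the groups and values are illustrative, not from the article), computes the raw mean difference and one common standardized index, Cohen's d, which divides the mean difference by the pooled standard deviation:

```python
import statistics

def cohens_d(group1, group2):
    """Standardized effect size (Cohen's d): the mean difference
    divided by the pooled standard deviation of the two groups."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)  # sample variances
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical examination scores (out of 50) for two intervention groups
intervention = [42, 45, 40, 44, 43, 41]
control = [28, 30, 27, 29, 31, 25]

# Absolute effect size: raw difference between group means
absolute = statistics.mean(intervention) - statistics.mean(control)

# Standardized effect size: interpretable without knowing the scale
d = cohens_d(intervention, control)
```

Unlike the absolute difference, d is expressed in standard-deviation units, which is what allows effects measured on different scales (exam questions, Likert points) to be compared or pooled in a meta-analysis.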
Why Report Effect Sizes?

The effect size is the main finding of a quantitative study. While a P value can inform the reader whether an effect exists, the P value will not reveal the size of the effect. In reporting and interpreting studies …
Background
To further evolve in an evidence-based fashion, medical education needs to develop and evaluate new practices for teaching, learning, and assessment. However, educators face barriers in designing, conducting, and publishing education research.
Objective
To explore the barriers medical educators face in formulating, conducting, and publishing high-quality medical education research, and to identify strategies for overcoming them.
Methods
A consensus workshop was held November 5, 2013, at the Association of American Medical Colleges annual meeting. A working group of education research experts and educators completed a preconference literature review focusing on barriers to education research. During the workshop, consensus-based and small group techniques were used to refine the broad themes into content categories. Attendees then ranked the most important barriers and the strategies with the highest potential impact for overcoming them.
Results
Barriers participants faced in conducting quality education research included lack of (1) expertise, (2) time, (3) funding, (4) mentorship, and (5) reward. The strategy considered most effective in overcoming these barriers involved building communities of education researchers for collaboration and networking, and advocating for education researchers' interests. Other suggestions included trying to secure increased funding opportunities, developing mentoring programs, and encouraging mechanisms to ensure protected time.
Conclusions
Barriers to education research productivity clearly exist. Many appear to result from feelings of isolation that may be overcome with systemic efforts to develop and enable communities of practice across institutions. Finally, the theme of “reward” is novel and complex and may have implications for education research productivity.