The new questionnaire is short yet grounded in a widely used framework for clinical teaching. The analyses presented here indicate good reliability and validity of the instrument. Future research needs to investigate whether feedback generated from this tool helps to improve teaching quality and student learning outcomes.
Background
The seven categories of the Stanford Faculty Development Program (SFDP) represent a framework for planning and assessing medical teaching. Nevertheless, so far there is no specific evaluation tool for large-group lectures that is based on these categories. This paper reports the development and psychometric validation of a short German evaluation tool for large-group lectures in medical education (SETMED-L: ‘Student Evaluation of Teaching in MEDical Lectures’) based on the SFDP categories.

Methods
Data were collected at two German medical schools. In Study 1, a full-information factor analysis of the new 14-item questionnaire was performed. In Study 2, following cognitive debriefings and adjustments, a confirmatory factor analysis was performed. The model was tested for invariance across medical schools and student gender. Convergent validity was assessed by comparison with results of the FEVOR questionnaire.

Results
Study 1 (n = 922) yielded a three-factor solution with one major factor (10 items) and two minor factors (2 items each). In Study 2 (n = 2740), this factor structure was confirmed. Scale reliability ranged between α = 0.71 and α = 0.88. Measurement invariance was given across student gender but not across medical schools. Convergent validity in the subsample tested (n = 246) yielded acceptable results.

Conclusion
The SETMED-L showed satisfactory to very good psychometric characteristics. Its main advantages are its short yet comprehensive form, the integration of the SFDP categories and its focus on medical education.

Electronic supplementary material
The online version of this article (doi:10.1186/s12909-017-0970-8) contains supplementary material, which is available to authorized users.
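The reported scale reliabilities (α = 0.71 to α = 0.88) are Cronbach's alpha values. As a minimal sketch of how such a coefficient is computed from raw questionnaire responses, the following Python function applies the standard formula α = k/(k−1) · (1 − Σ item variances / total-score variance); the score matrix below is invented illustrative data, not data from the studies.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of respondents' sum scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents rating 4 Likert-type items (1-5).
scores = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 3],
    [3, 3, 4, 4],
    [5, 5, 5, 4],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(scores), 2))  # 0.96 for this toy matrix
```

Higher inter-item correlation inflates the total-score variance relative to the summed item variances, which is why internally consistent scales yield values closer to 1.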
Teachers appreciated the individual feedback provided by the evaluation tool and stated that they wanted to improve their teaching based on the results; however, they felt that the preparatory communication was largely lacking. Students were unsure about the additional benefit of the instrument compared with traditional evaluation tools. A majority were unwilling to complete evaluation forms in their spare time, and some felt that the new questionnaire was too long and that the evaluations occurred too often. They were particularly interested in feedback on how their comments had helped to further improve teaching.

Conclusion: Student evaluations of teaching can provide useful feedback. Despite evidence of the utility of the tool for individual teachers, implementation of changes to the process of evaluation appears to have been suboptimal, mainly owing to a perceived lack of communication. In order to motivate students to provide evaluation data, feedback loops including aims and consequences should be established.