Progress testing was introduced in 1999 at the Charité-Universitätsmedizin Berlin. This Berlin Progress Test Medizin (PTM) began cooperating with other medical schools in 2000. The cooperation has grown continuously, and 13 medical schools in Germany and Austria now take part, comprising more than 8,500 students. This article focuses on the concept and quality of the PTM and its benefits for students and medical schools. It shows how an initially small student initiative has developed into a successful international cooperation in formative testing in medical education.
Introduction: Mini Clinical Evaluation Exercise (Mini-CEX) and Direct Observation of Procedural Skills (DOPS) are used as formative assessments worldwide. Since an up-to-date comprehensive synthesis of the educational impact of Mini-CEX and DOPS is lacking, we performed a systematic review. Moreover, as the educational impact might be influenced by characteristics of the setting in which Mini-CEX and DOPS take place or their implementation status, we additionally investigated these potential influences.
Methods: We searched Scopus, Web of Science, and Ovid, including All Ovid Journals, Embase, ERIC, Ovid MEDLINE(R), and PsycINFO, for original research articles investigating the educational impact of Mini-CEX and DOPS on undergraduate and postgraduate trainees from all health professions, published in English or German from 1995 to 2016. Educational impact was operationalized and classified using Barr’s adaptation of Kirkpatrick’s four-level model. Where applicable, outcomes were pooled in meta-analyses, separately for Mini-CEX and DOPS. To examine potential influences, we used Fisher’s exact test for count data.
Results: We identified 26 articles demonstrating heterogeneous effects of Mini-CEX and DOPS on learners’ reactions (Kirkpatrick Level 1) and positive effects of Mini-CEX and DOPS on trainees’ performance (Kirkpatrick Level 2b; Mini-CEX: standardized mean difference (SMD) = 0.26, p = 0.014; DOPS: SMD = 3.33, p < 0.001). No studies were found on higher Kirkpatrick levels. Regarding potential influences, we found two implementation characteristics, “quality” and “participant responsiveness”, to be associated with the educational impact.
Conclusions: Despite the limited evidence, the meta-analyses demonstrated positive effects of Mini-CEX and DOPS on trainee performance. Additionally, we revealed implementation characteristics to be associated with the educational impact. Hence, we assume that considering implementation characteristics could increase the educational impact of Mini-CEX and DOPS.
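The abstract above reports outcomes pooled in meta-analyses as standardized mean differences (SMDs). A minimal sketch of how such pooling can work is given below, assuming a fixed-effect, inverse-variance model (the specific pooling model is our assumption, not stated in the abstract, and the study-level values are invented for illustration):

```python
import math

def pool_smd(smds, variances):
    """Fixed-effect, inverse-variance pooling of standardized mean differences.

    Each study's SMD is weighted by the reciprocal of its variance; the
    pooled estimate is the weighted mean, and its standard error is the
    square root of the reciprocal of the summed weights.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, smds)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Illustrative (made-up) study-level SMDs and variances:
pooled, se = pool_smd([0.20, 0.35, 0.25], [0.04, 0.09, 0.06])
```

A random-effects model would additionally estimate between-study heterogeneity before weighting; the fixed-effect version is shown only because it makes the weighting logic transparent.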
In medical education, the effect of the educational environment on student achievement has primarily been investigated in comparisons between traditional and problem-based learning (PBL) curricula. As many of these studies have reached no clear conclusions on the superiority of the PBL approach, the effect of curricular reform on student performance remains an open issue. We employed a theoretical framework that integrates antecedents of student achievement from various psychosocial domains to examine how students interact with their curricular environment. In a longitudinal study with N = 1,646 participants, we assessed students in a traditional and a PBL-centered curriculum. The measures administered included students' perception of the learning environment, self-efficacy beliefs, positive study-related affect, social support, indicators of self-regulated learning, and academic achievement assessed through progress tests. We compared the relations between these characteristics in the two curricular environments. The results are two-fold. First, substantial relations of various psychosocial domains and their associations with achievement were identified. Second, our analyses indicated that there are no substantial differences between traditional and PBL-based curricula concerning the relational structure of psychosocial variables and achievement. Drawing definite conclusions on the role of curricular-level interventions in the development of students' academic achievement is constrained by the quasi-experimental design as well as the selection of variables included. However, in the specific context described here, our results may still support the view of student activity as the key ingredient in the acquisition of achievement and performance.
Progress testing as a longitudinal method allows us to better understand the development of knowledge during formal undergraduate education. The main difference between traditional and problem-based medical education seems to be provoked by the high-stakes national examination undertaken in the traditional course (the Physikum).
CONTEXT: Basic science teaching in undergraduate medical education faces several challenges. One prominent discussion is focused on the relevance of biomedical knowledge to the development and integration of clinical knowledge. Although the value of basic science knowledge is generally emphasised, theoretical positions on the relative role of this knowledge and the optimal approach to its instruction differ. The present paper addresses whether and to what extent biomedical knowledge is related to the development of clinical knowledge.
METHODS: We analysed repeated-measures data for performances on basic science and clinical knowledge assessments. A sample of 598 medical students on a traditional curriculum participated in the study. The entire study covered a developmental phase of 2 years of medical education. Structural equation modelling was used to analyse the temporal relationship between biomedical knowledge and the acquisition of clinical knowledge.
RESULTS: At the point at which formal basic science education ends and clinical training begins, students show the highest levels of biomedical knowledge. The present data suggest a decline in basic science knowledge that is complemented by a growth in clinical knowledge. Statistical comparison of several structural equation models revealed that the model to best explain the data specified unidirectional relationships between earlier states of biomedical knowledge and subsequent changes in clinical knowledge. However, the parameter estimates indicate that this association is negative.
DISCUSSION: Our analysis suggests a negative relationship between earlier levels of basic science knowledge and subsequent gains in clinical knowledge. We discuss the limitations of the present study, such as the educational context in which it was conducted and its nonexperimental nature.
Although the present results do not necessarily contradict the relevance of basic sciences, we speculate on mechanisms that might be related to our findings. We conclude that our results hint at possibly critical issues in basic science education that have rarely been addressed thus far.
The Berlin Progress Test has grown into a cooperation of 13 universities. Recently, comparisons between the participating schools have become an area of high interest. Muijtjens et al. [Muijtjens AM, Schuwirth LWT, Cohen-Schotanus J, Thoben AJNM, van der Vleuten CPM. 2008a. Benchmarking by cross-institutional comparison of student achievement in a progress test. Med Educ 41(1):82-88; Muijtjens AM, Schuwirth LWT, Cohen-Schotanus J, van der Vleuten CPM. 2008b. Differences in knowledge development exposed by multi-curricular progress test data. Adv Health Sci Educ 13:593-605] proposed a method for cross-institutional benchmarking based on progress test data. Progress testing has some major advantages, as it delivers longitudinal information about students' growth of knowledge. By adopting the procedure of Muijtjens et al. (2008a, b), we were able to replicate the basic characteristics of the cumulative deviation method. Besides the advantages of the method, there are some difficulties: errors of measurement are not independent, which violates the assumptions underlying tests of statistical differences.
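The core idea of the cumulative deviation method described above can be sketched as follows: for each test occasion, a school's mean score is compared with the overall mean across all participating schools, and these per-occasion deviations are accumulated over time. This is a minimal illustration of the idea only; the function name and the input data are invented, and the published procedure includes details (e.g. confidence bands) not shown here:

```python
def cumulative_deviation(school_means, overall_means):
    """Accumulate per-occasion deviations of one school's mean progress-test
    score from the overall mean across all schools.

    A rising curve suggests the school scores above the benchmark over
    time; a falling curve suggests the opposite.
    """
    cumulative, total = [], 0.0
    for school, overall in zip(school_means, overall_means):
        total += school - overall
        cumulative.append(total)
    return cumulative

# Illustrative (made-up) means for three test occasions:
curve = cumulative_deviation([55.0, 60.0, 58.0], [54.0, 58.0, 59.0])
```

Because deviations share measurement error across occasions, consecutive points on such a curve are not independent, which is exactly the difficulty for significance testing noted in the text above.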
Despite the frequent use of state-of-the-art psychometric models in the field of medical education, there is a growing body of literature that questions their usefulness in the assessment of medical competence. Essentially, a number of authors have raised doubts about the appropriateness of psychometric models as a guiding framework to secure and refine current approaches to the assessment of medical competence. In addition, an intriguing phenomenon known as case specificity is central to the controversy on the use of psychometric models for the assessment of medical competence. Broadly speaking, case specificity is the finding that performance is unstable across clinical cases, tasks, or problems. As stability of performance is, generally speaking, a central assumption in psychometric models, case specificity may limit their applicability. This has probably supplied critiques of psychometrics with a substantial amount of apparent empirical evidence. This article aimed to explain the fundamental ideas employed in psychometric theory, and how they might be problematic in the context of assessing medical competence. We further aimed to show why and how some critiques do not hold for the field of psychometrics as a whole, but rather only for specific psychometric approaches. Hence, we highlight approaches that, from our perspective, seem to offer promising possibilities when applied to the assessment of medical competence. In conclusion, we advocate a more differentiated view of psychometric models and their usage.