This study suggests that participation mediates the relationships of motivation and learning strategies with medical school performance. However, participation and self-efficacy beliefs also made unique contributions to performance. Encouraging participation and strengthening self-efficacy may help to enhance medical student performance.
The educational environment has been increasingly acknowledged as vital for high-quality medical education. As a result, several instruments have been developed to measure the quality of the medical educational environment. However, there appears to be no consensus about which concepts should be measured. The absence of a theoretical framework may explain this lack of consensus. Therefore, we aimed to (1) find a comprehensive theoretical framework defining the essential concepts, and (2) test its applicability. An initial review of the medical educational environment literature indicated that such frameworks are lacking. Therefore, we chose an alternative approach to lead us to relevant frameworks from outside the medical educational field; that is, we applied a snowballing technique to find the educational environment instruments used to build the contents of the medical ones and investigated their theoretical underpinnings (Study 1). We found two frameworks, one of which was described as incomplete and one of which defines three domains as the key elements of human environments (personal development/goal direction, relationships, and system maintenance and system change) and has been validated in different contexts. To test its applicability, we investigated whether the items of nine medical educational environment instruments could be mapped onto the framework (Study 2). Of 374 items, 94% could: 256 (68%) pertained to a single domain and 94 (25%) to more than one domain. In our context, these domains were found to concern goal orientation, relationships, and organization/regulation. We conclude that this framework is applicable and comprehensive, and recommend using it as the theoretical underpinning for medical educational environment measures.
The GRAS is a practical measurement instrument that yields reliable data contributing to valid inferences about the personal reflection ability of medical students and doctors, both at the individual and the group level.
Background: Reflection on experience is an increasingly critical part of professional development and lifelong learning. There is, however, continuing uncertainty about how best to put principle into practice, particularly as regards assessment. This article explores those uncertainties in order to find practical ways of assessing reflection. Discussion: We critically review four problems: 1. Inconsistent definitions of reflection; 2. Lack of standards to determine (in)adequate reflection; 3. Factors that complicate assessment; 4. Internal and external contextual factors affecting the assessment of reflection. Summary: To address the problem of inconsistency, we identified processes that were common to a number of widely quoted theories and synthesised a model, which yielded six indicators that could be used in assessment instruments. We arrived at the conclusion that, until further progress has been made in defining standards, assessment must depend on developing and communicating local consensus between stakeholders (students, practitioners, teachers, supervisors, curriculum developers) about what is expected in exercises and formal tests. Major factors that complicate assessment are the subjective nature of reflection's content and the dependency on descriptions by the persons being assessed of their reflection process, without any objective means of verification. To counter these validity threats, we suggest that assessment should focus on generic process skills rather than the subjective content of reflection and, where possible, consider objective information about the triggering situation to verify described reflections. Finally, internal and external contextual factors, such as motivation, instruction, the character of assessment (formative or summative), and the ability of individual learning environments to stimulate reflection, should be considered.
A top pre-university grade point average was the best predictor of performance. For so-called non-academic performance, the multifaceted selection process was efficient in identifying applicants with suitable skills. Participation in the multifaceted selection procedure seems to be predictive of higher performance. Further research is needed to assess whether our results are generalisable to other medical schools.
Background: The validation of educational instruments, in particular the employment of factor analysis, can be improved in many instances. Aims: To demonstrate the superiority of a sophisticated method of factor analysis, implying an integration of recommendations described in the factor analysis literature, over often employed limited applications of factor analysis. We demonstrate the essential steps, focusing on the Postgraduate Hospital Educational Environment Measure (PHEEM). Method: The PHEEM was completed by 279 clerks. We performed Principal Component Analysis (PCA) with varimax rotation. A combination of three psychometric criteria was applied: scree plot, eigenvalues >1.5 and a minimum percentage of additionally explained variance of approximately 5%. Furthermore, four interpretability criteria were used. Confirmatory factor analysis was performed to verify the original scale structure. Results: Our method yielded three interpretable and practically useful dimensions: learning content and coaching, beneficial affective climate and external regulation. Additionally, combining several criteria reduced the risk of overfactoring and underfactoring. Furthermore, the resulting dimensions corresponded with three learning functions essential to high-quality learning, thus strengthening our findings. Confirmatory factor analysis disproved the original scale structure. Conclusions: Our sophisticated approach yielded several advantages over methods applied in previous validation studies. Therefore, we recommend this method in validation studies to achieve best practice.
Four hypotheses potentially explaining the effect of active learning on graduation rate and study duration were considered: (i) active-learning curricula promote the social and academic integration of students; (ii) active-learning curricula attract brighter students; (iii) active-learning curricula retain more poor students; and (iv) the active engagement with their study required of students by active-learning curricula induces better academic performance and, hence, lower dropout rates. The first three hypotheses had to be rejected. It was concluded that the better-learning hypothesis provides the most parsimonious account for the data.