Purpose
Assessment of the Core Entrustable Professional Activities for Entering Residency (Core EPAs) requires direct observation of learners in the workplace to support entrustment decisions. The purpose of this study was to examine the internal structure validity evidence of the Ottawa Surgical Competency Operating Room Evaluation (O-SCORE) scale when used to assess medical student performance in the Core EPAs across clinical clerkships.
Background
Analytic thinking skills are important to the development of physicians. Therefore, educators and licensing boards use multiple-choice questions (MCQs) to assess such knowledge and skills. MCQs are written under two assumptions: that they can be written as higher or lower order according to Bloom’s taxonomy, and that students will perceive questions to be at the same taxonomical level as intended. This study seeks to understand students’ approach to questions by analyzing differences in students’ perceptions of the Bloom’s level of MCQs in relation to their knowledge and confidence.
Methods
A total of 137 students responded to practice endocrine MCQs. Participants indicated the answer to the question, their interpretation of it as higher or lower order, and the degree of confidence in their response to the question.
Results
Although there was no significant association between students’ average performance on the content and their question classification (higher or lower), individual students who were less confident in their answer were more than five times as likely (OR = 5.49) to identify a question as higher order than were their more confident peers. Students who responded incorrectly to an MCQ were four times as likely to identify a question as higher order as were peers who responded correctly.
Conclusions
The results suggest that higher performing, more confident students rely on identifying patterns (even when the question is intended to be higher order). In contrast, less confident students engage in higher-order, analytic thinking even when the question is intended to be lower order. A better understanding of the processes through which students interpret MCQs will help us better understand the development of clinical reasoning skills.
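As a hedged illustration of the odds-ratio statistics reported above, the sketch below computes an OR from a 2x2 table of classification counts. The counts are entirely synthetic (not the study's data); they merely show how a figure like OR = 5.49 arises from such a table.

```python
# Hypothetical 2x2 counts (NOT the study's data), illustrating how an
# odds ratio comparing question-classification behavior is computed.
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:

                      classified higher   classified lower
    low-confidence           a                   b
    high-confidence          c                   d
    """
    return (a / b) / (c / d)

# Synthetic example: low-confidence students label 60 of 100 questions
# higher order; high-confidence students label 22 of 101.
or_value = odds_ratio(60, 40, 22, 79)
print(round(or_value, 2))
```

With these made-up counts the odds of a low-confidence student labeling a question higher order are roughly five times those of a high-confidence student, the same order of magnitude as the reported effect.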
This article offers insights into students’ perceptions of writing through the use of drawings and written responses. In a descriptive qualitative study of fifth graders across two diverse elementary schools, students were prompted to draw a picture about a recent experience with writing and how that experience made them feel. Students were then asked to write a description of their drawings. We studied features in the drawings and written responses and constructed four thematic categories. Findings highlight the range of both positive and negative experiences with writing and offer literacy teachers a practical tool for taking the temperature of the classroom.
Introduction
As educators seek to improve medical student well‐being, it is essential to understand the interplay between distress and important outcomes. Performance on Step 1 of the United States Medical Licensing Examination has played a significant role in selection for postgraduate residency positions in the United States and consequently has been a source of great stress for medical students. The purpose of this study was to examine whether student well‐being correlates with performance on a high stakes licensing examination.
Methods
Between 2014 and 2016, three sequential cohorts of medical students at the University of Michigan Medical School completed the Medical Student Well‐Being Index (MSWBI) at the end of their 2nd‐year curriculum, shortly before taking Step 1. Associations between well‐being and Step 1 scores were investigated while adjusting for MCAT scores and cumulative second‐year course scores.
Results
In total, 354 students were included in the analysis (68.1% of potential responders). On bivariate analysis, poor student well‐being (0 = low distress [high well‐being], 7 = high distress [poor well‐being]) was associated with lower Step 1 examination scores (slope = −2.10, P < .01), and well‐being accounted for 5% of overall Step 1 score variability (R2 = .05). However, after adjustment for MCAT scores and cumulative GPA (full model R2 = .51), the relationship between well‐being and Step 1 score was no longer significant (slope = −0.70, P = .06).
Conclusions
When controlling for metrics of academic performance, student well‐being prior to taking Step 1 was not associated with how well students performed on Step 1 for the study sample.
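The attenuation reported above (a significant bivariate slope that shrinks after adjusting for prior academic performance) can be sketched with ordinary least squares on synthetic data. Everything below is an assumption-laden toy: the variables, effect sizes, and noise levels are invented solely to show the adjustment mechanism, not to reproduce the study's results.

```python
# Illustrative sketch (synthetic data, NOT the study's): adjusting for a
# prior-performance covariate can attenuate a bivariate regression slope.
import numpy as np

rng = np.random.default_rng(0)
n = 354
gpa = rng.normal(0, 1, n)                     # stand-in for MCAT/GPA metrics
distress = -0.5 * gpa + rng.normal(0, 1, n)   # distress tracks lower GPA
step1 = 10 * gpa + rng.normal(0, 5, n)        # exam score driven mainly by GPA

def ols_slopes(y, predictors):
    """Least-squares coefficients for y ~ predictors (intercept dropped)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

b_bivariate = ols_slopes(step1, [distress])[0]
b_adjusted = ols_slopes(step1, [distress, gpa])[0]
print(b_bivariate, b_adjusted)  # adjusted slope is much closer to zero
```

Because distress carries no information about the exam score beyond what GPA already explains in this toy setup, its coefficient collapses toward zero once GPA enters the model, mirroring the pattern in the abstract.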
Background
The master adaptive learner (MAL) uses self-regulated learning skills to develop adaptive, efficient, and accurate skills in practice. Given rapid changes in healthcare, it is essential that medical students develop into MALs. There is a need for an instrument that can capture MAL behaviors and characteristics. The objective of this study was to develop an instrument for measuring the MAL process in medical students and evaluate its psychometric properties.
Methods
As part of curriculum evaluation, 818 students completed previously developed instruments with validity evidence, including the Self-Regulated Learning Perception Scale, Brief Resilience Scale, Goal Orientation Scale, and Jefferson Scale of Physician Lifelong Learning. The authors performed exploratory factor analysis to examine underlying relationships between items. Items with high factor loadings were retained. Cronbach’s alpha was computed. In parallel, the multi-institutional research team rated the same items to provide content validity evidence linking the items to the MAL model.
Results
The original 67 items were reduced to 28 items loading onto four factors: Planning, Learning, Resilience, and Motivation. Each subscale included the following number of items and Cronbach’s alpha: Planning (10 items, alpha = 0.88), Learning (6 items, alpha = 0.81), Resilience (6 items, alpha = 0.89), and Motivation (6 items, alpha = 0.81). The findings from the factor analyses aligned with the research team ratings of linkage to the components of MAL.
Conclusion
These findings serve as a starting point for future work measuring master adaptive learning to identify and support learners. To fully measure the MAL construct, additional items may need to be developed.
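The subscale reliabilities reported above (e.g., alpha = 0.89 for the six-item Resilience factor) can be illustrated with a minimal Cronbach's alpha computation. The item responses below are synthetic, generated from a single latent trait purely to demonstrate the formula; they are not the study's data.

```python
# Minimal sketch of Cronbach's alpha for a subscale, using synthetic
# item responses (NOT the study's data).
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Synthetic six-item subscale: each item reflects one latent trait
# plus independent noise, yielding strongly correlated items.
rng = np.random.default_rng(1)
trait = rng.normal(0, 1, (200, 1))
responses = trait + rng.normal(0, 0.7, (200, 6))
print(round(cronbach_alpha(responses), 2))
```

Tightening the noise term raises the inter-item correlations and pushes alpha toward 1, which is the intuition behind the subscale reliabilities the study reports.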