Tests should be given often and spaced out in time to promote better retention of information. Questions that require effortful recall produce the greatest gains in memory. Feedback is crucial to learning from tests. Test-enhanced learning may be an effective tool for medical educators to use in promoting retention of clinical knowledge.
Repeated testing with feedback appears to result in significantly greater long-term retention of information taught in a didactic conference than repeated, spaced study. Testing should be considered for its potential impact on learning and not only as an assessment device.
Exome sequencing has markedly enhanced the discovery of genes implicated in Mendelian disorders, particularly for individuals in whom a known clinical entity could not be assigned. This has led to the recognition that phenotypic heterogeneity resulting from allelic mutations occurs more commonly than previously appreciated. Here, we report that missense variants in CDC42, a gene encoding a small GTPase functioning as an intracellular signaling node, underlie a clinically heterogeneous group of phenotypes characterized by variable growth dysregulation, facial dysmorphism, and neurodevelopmental, immunological, and hematological anomalies, including a phenotype resembling Noonan syndrome, a developmental disorder caused by dysregulated RAS signaling. In silico, in vitro, and in vivo analyses demonstrate that mutations variably perturb CDC42 function by altering the switch between the active and inactive states of the GTPase and/or affecting CDC42 interaction with effectors, and differentially disturb cellular and developmental processes. These findings reveal the remarkably variable impact that dominantly acting CDC42 mutations have on cell function and development, creating challenges in syndrome definition, and exemplify the importance of functional profiling for syndrome recognition and delineation.
CONTEXT Educators often encourage students to engage in active learning by generating explanations for the material being learned, a method called self-explanation. Studies have also demonstrated that repeated testing improves retention. However, no studies have directly compared the two learning methods.

METHODS Forty-seven Year 1 medical students completed the study. All students participated in a teaching session that covered four clinical topics and was followed by four weekly learning sessions. In the learning sessions, students were randomised to perform one of four learning activities for each topic: testing with self-generated explanations (TE); testing without explanations (T); studying a review sheet with self-generated explanations (SE); and studying a review sheet without explanations (S). Students repeated the same activity for each topic in all four sessions. Six months later, they took a free-recall clinical application test on all four topics.

RESULTS Repeated testing led to better long-term retention and application than repeatedly studying the material (p < 0.0001, η² = 0.33). Repeated generation of self-explanations also improved long-term retention and application, but the effect was smaller (p < 0.0001, η² = 0.08). When data were collapsed across topics, both testing conditions produced better final test performance than studying with self-explanation. Studying with self-explanation led to better retention and application than studying without self-explanation (SE = 29% > S = 20%; p = 0.001, d = 0.68). Our analyses showed a significant interaction by topic (p = 0.001, η² = 0.06), indicating some variation in the effectiveness of the interventions among topics.

CONCLUSIONS Testing and generating self-explanations are both learning activities that can produce superior long-term retention and application of knowledge, but testing is generally more effective than self-explanation alone.
Previous research has shown that repeated retrieval with written tests produces superior long-term retention compared to repeated study. However, the degree to which this increased retention transfers to clinical application has not been investigated. In addition, increased retention obtained through written testing has not been compared to other forms of testing, such as simulation testing with a standardized patient (SP). In our study, 41 medical students learned three clinical topics through three different learning activities: testing with SPs, testing using written tests, and studying a review sheet. Students were randomized in a counter-balanced fashion to engage in one learning activity per topic. They participated in four weekly testing/studying sessions to learn the material, engaging in the same activity for a given topic in each session. Six months after initial learning, they returned to take an SP test on each topic, followed by a written test on each topic 1 week later. On both forms of final testing, we found that learning through SP testing and written testing generally produced superior long-term retention compared to studying a review sheet. SP testing led to significantly better performance on the final SP test relative to written testing, but there was no significant difference between the two testing conditions on the final written test. Overall, our study shows that repeated retrieval practice with both SPs and written testing enhances long-term retention and transfer of knowledge to a simulated clinical application.
Introduction A large body of evidence indicates that retrieval practice (test-enhanced learning) and spaced repetition increase long-term information retention. Implementation of these strategies in medical curricula is unfortunately limited. However, students may choose to apply them autonomously when preparing for high-stakes, cumulative assessments, such as the United States Medical Licensing Examination Step 1. We examined the prevalence of specific self-directed methods of testing, with or without spaced repetition, among preclinical students and assessed the relationship between these methods and licensing examination performance.

Methods Seventy-two medical students at one institution completed a survey concerning their use of user-generated (Anki) or commercially available (Firecracker) flashcards intended for spaced repetition, and of boards-style multiple-choice questions (MCQs). Other information collected included Step 1 score, past academic performance (Medical College Admission Test [MCAT] score, preclinical grades), and psychological factors that may have affected exam preparation or performance (feelings of depression, burnout, and test anxiety).

Results All students reported using practice MCQs (mean 3870, SD 1472). Anki and Firecracker users comprised 31% and 49% of respondents, respectively. In a multivariate regression model, significant independent predictors of Step 1 score included MCQs completed (unstandardized beta coefficient [B] = 2.2 × 10⁻³, p < 0.001), unique Anki flashcards seen (B = 5.9 × 10⁻⁴, p = 0.024), second-year honours (B = 1.198, p = 0.002), and MCAT score (B = 1.078, p = 0.003). Test anxiety was a significant negative predictor (B = −1.986, p < 0.001). Unique Firecracker flashcards seen did not predict Step 1 score. Each additional 445 boards-style practice questions or 1700 unique Anki flashcards was associated with an additional point on Step 1 when controlling for other academic and psychological factors.

Conclusions Medical students engage extensively in self-initiated retrieval practice, often with spaced repetition. These practices are associated with superior performance on a medical licensing examination and should be considered for formal support by educators.
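As an illustration only (this is not code or analysis from the study), the practice-per-point figures quoted in the Results follow from inverting the reported unstandardized coefficients; the small mismatch with the published 445 likely reflects rounding of B before publication:

```python
# Illustrative sketch, not the study's analysis: the unstandardized
# coefficient B is Step 1 points gained per unit of practice, so its
# reciprocal is units of practice needed per additional Step 1 point.
b_mcq = 2.2e-3    # reported B for boards-style MCQs completed
b_anki = 5.9e-4   # reported B for unique Anki flashcards seen

mcqs_per_point = 1 / b_mcq    # ~455 questions per point (study reports ~445;
                              # the gap is consistent with rounding of B)
anki_per_point = 1 / b_anki   # ~1695 flashcards per point (~1700 reported)

print(round(mcqs_per_point), round(anki_per_point))
```

This kind of back-of-the-envelope check is useful when reading regression results: an unstandardized coefficient near zero can still describe a practically meaningful dose-response once rescaled to interpretable units.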
Incorporating team-based learning (TBL) into the pre-clinical pediatrics curriculum led to large gains in knowledge over the short term, but these gains did not persist. Further research should focus on extending the impact of TBL on long-term knowledge retention.
Learning goals are potentially powerful tools to mediate interactions between students, supervisors and patients, and to reconcile contradictions in work-based learning environments. Learning goals provide a means to develop not only learners, but also learning systems.