The present study shows the beneficial influence of generating self-explanations when dealing with less familiar clinical contexts. Generating self-explanations without feedback resulted in better diagnostic performance than in the control group 1 week after the intervention.
OBJECTIVE General guidelines for teaching clinical reasoning have received much attention, despite a paucity of instructional approaches with demonstrated effectiveness. A recent experimental study suggested that self-explanation while solving clinical cases may be an effective strategy to foster reasoning in clinical clerks dealing with less familiar cases. However, the mechanisms that mediate this benefit have not been specifically investigated. The aim of this study was to explore the types of knowledge used by students when solving familiar and less familiar clinical cases with self-explanation.

METHODS In a previous study, 36 third-year medical students diagnosed familiar and less familiar clinical cases either by engaging in self-explanation or not. Based on an analysis of the previously collected data, the present study compared the content of the self-explanation protocols generated by seven randomly selected students while solving four familiar and four less familiar cases. In total, 56 verbal protocols (28 familiar and 28 less familiar) were segmented and coded using the following categories: paraphrases, biomedical inferences, clinical inferences, monitoring statements and errors.

RESULTS Students produced more self-explanation segments for less familiar cases (M = 275.29) than for familiar cases (M = 248.71; p = 0.046). With less familiar cases, they also provided significantly more paraphrases (p = 0.001) and made more errors (p = 0.008). A significant interaction was found between case familiarity and type of inference (biomedical versus clinical; p = 0.016): when self-explaining less familiar cases, students provided significantly more biomedical inferences than when self-explaining familiar cases.
Self-explanation seems to be an effective technique for helping medical students learn clinical reasoning. Its impact is significantly increased by combining it with examples of residents' self-explanations (SEs) and with prompts. Although students' exposure to examples of clinical reasoning is important, their 'active processing' of these examples appears to be critical to their learning from them.
Educational strategies that promote the development of clinical reasoning in students remain scarce. Generating self-explanations (SE) engages students in active learning and has been shown to be an effective technique for improving clinical reasoning in clerks. Example-based learning has been shown to support the development of accurate knowledge representations. The purpose of this study was to investigate the effect of combining students' SE with observation of peers' or experts' SE examples on diagnostic performance. Fifty-three third-year medical students were assigned to a peer SE example group, an expert SE example group or a control (no example) group. All participants solved the same set of four clinical cases (training cases) three times: (1) after SE, (2) after listening to a peer or expert SE example or after a control task, and (3) 1 week later. They also solved a new set of four different cases (transfer cases) 1 week later. For training cases, students significantly improved their diagnostic performance over time, but the main effect of group was not significant, suggesting that students' own SE mainly drives the observed effect. On transfer cases, there was no difference between the three groups (p > .05). Educational implications are discussed, and further studies on different types of examples and on additional strategies to help students actively process examples are proposed.
Background Self-explanation without feedback has been shown to improve medical students' diagnostic reasoning. While feedback is generally seen as beneficial for learning, the available evidence on the value of combining it with self-explanation is conflicting. This study investigated the effect of adding immediate or delayed content feedback to self-explanation while solving cases on medical students' diagnostic performance. Methods Ninety-four third-year students from a Canadian medical school were randomly assigned to one of three experimental conditions (immediate feedback, delayed feedback, control). In the learning phase, all students solved four clinical cases using self-explanation, giving (i) the most likely diagnosis, (ii) two main arguments supporting this diagnosis, and (iii) two plausible alternative diagnoses. The immediate-feedback group was given the correct diagnosis after each case; the delayed-feedback group received the correct diagnoses only after all four cases; the control group received no feedback. One week later, all students solved four near-transfer cases (i.e., same final diagnosis as the learning cases but different scenarios) and four far-transfer cases (i.e., different final diagnoses from the learning cases and different scenarios) by answering the same three questions. Students' diagnostic accuracy (score for the response to the first question only) and diagnostic performance (combined score for the responses to all three questions) were assessed in each phase. Four one-way ANOVAs were performed, one on each of the two scores for near-transfer and for far-transfer cases. Results There was a significant effect of experimental condition on diagnostic accuracy on near-transfer cases (p < .05). The immediate-feedback and delayed-feedback groups performed equally well, and both outperformed the control group (respectively, mean = 90.73, standard deviation = 10.69; mean = 89.92, standard deviation = 13.85; mean = 82.03, standard deviation = 17.66).
The experimental conditions did not differ significantly on far-transfer cases. Conclusions Providing students with the correct diagnosis as feedback after they use self-explanation with clinical cases may improve their diagnostic accuracy, but this effect appears limited to similar cases. Further studies should explore how more elaborate feedback combined with self-explanation affects students' diagnostic performance on different cases. Electronic supplementary material The online version of this article (10.1186/s12909-019-1638-3) contains supplementary material, which is available to authorized users.