Background: Many practicing physicians lack physical examination skills. It is not known whether deficiencies in these skills are already apparent after an early phase of clinical training. At the end of the internal medicine clerkship, students are expected to be able to perform a general physical examination in every new patient encounter. In a previous study, the basic physical examination items that should be performed as standard were established by consensus. The aim of the current observational study was to assess whether medical students were able to correctly perform a general physical examination, in terms of both completeness and technique, at the end of the internal medicine clerkship.

Methods: One hundred students who had just finished their internal medicine clerkship were asked to perform a general physical examination on a standardized patient as they had learned during the clerkship. The examinations were recorded on camera. The frequency of performance of each component of the physical examination was counted. Adequacy of performance was scored as correct, incorrect, or not assessable using a checklist of short descriptions of each physical examination component. A reliability analysis was performed by calculating the intraclass correlation coefficient for the total scores of five physical examinations rated by three trained physicians and for their agreement on the performance of all items.

Results: Approximately 40% of the agreed standard physical examination items were not performed by the students. Students put the most emphasis on examination of general parameters, heart, lungs, and abdomen. Many components of the physical examination were not performed as taught during the preparatory courses. Intraclass correlation was high for the total scores of the physical examinations (0.91, p < 0.001) and for agreement on the performance of the five physical examinations (0.79-0.92, p < 0.001).

Conclusions: Performance of the general physical examination was already below expectation at the end of the internal medicine clerkship. Possible causes and suggestions for improvement are discussed.
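The reliability analysis above hinges on the intraclass correlation coefficient (ICC). As an illustrative sketch only (the abstract does not report the raw ratings or which ICC variant was used), the two-way ICC(2,1) for absolute agreement of single raters can be computed from a subjects-by-raters score matrix; the scores below are made up to match the study's design of five examinations rated by three physicians:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_subjects, k_raters) matrix of scores.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    # Two-way ANOVA decomposition: subjects (rows), raters (columns), residual
    ss_rows = k * np.sum((ratings.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((ratings.mean(axis=0) - grand) ** 2)
    ss_err = np.sum((ratings - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical total scores: 5 examinations (rows) x 3 physicians (columns)
scores = [[62, 60, 61],
          [45, 47, 44],
          [70, 69, 72],
          [55, 53, 54],
          [38, 40, 39]]
print(round(icc_2_1(scores), 2))  # with these made-up scores: 0.99
```

A high value arises here because the raters' scores differ far less than the examinations differ from one another, which is exactly what a high ICC expresses.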
Background: During their clerkships, medical students are expected to expand their clinical reasoning skills through their patient encounters. Observation of these encounters could reveal important information on students' clinical reasoning abilities, especially during history taking.

Methods: A grounded theory approach was used to analyze which indicators expert physicians apply when assessing medical students' diagnostic reasoning abilities during history taking. Twelve randomly selected recordings of clinical encounters of students at the end of the internal medicine clerkship were observed by six expert assessors, who were prompted to formulate their assessment criteria in a think-aloud procedure. These formulations were then analyzed to identify common denominators and leading principles.

Results: The main indicators of clinical reasoning ability were abstracted from students' observable acts during history taking: taking control; recognizing and responding to relevant information; specifying symptoms; asking specific questions that point to pathophysiological thinking; placing questions in a logical order; checking agreement with patients; summarizing; and body language. In addition, patients' acts and the course, result, and efficiency of the conversation were identified as indicators of clinical reasoning, whereas context, use of self as a reference, and emotions/feelings were identified by the clinicians as variables in their assessment of clinical reasoning.

Conclusions: General and specific phenomena that can serve as indicators of clinical reasoning during history taking by medical students could be identified. These phenomena can be traced back to theories on the development and process of clinical reasoning.
Background: Systematic assessment of the clinical reasoning skills of medical students in clinical practice is very difficult. This is partly caused by a lack of understanding of the fundamental mechanisms underlying the process of clinical reasoning.

Methods: We previously developed an observation tool to assess the clinical reasoning skills of medical students during clinical practice. This observation tool consists of an 11-item observation rating form (ORT). In the present study we verified the validity, reliability, and feasibility of this tool and of an existing post-encounter rating tool (PERT) among medical students during the internal medicine clerkship.

Results: Six raters each assessed the same 15 student-patient encounters. The internal consistency (Cronbach's alpha) was 0.87 (0.71-0.84) for the 11-item ORT and 0.81 (0.71-0.87) for the 5-item PERT. The intraclass correlation coefficient for single measurements was poor for both the ORT (0.32, p < 0.001) and the PERT (0.36, p < 0.001). The generalizability study (G-study) and decision study (D-study) showed that 6 raters are required to achieve a G-coefficient of > 0.7 for the ORT, and 7 raters for the PERT. The largest source of variance was the interaction between raters and students. There was a consistent correlation between the ORT and PERT of 0.53 (p = 0.04).

Conclusions: The ORT and PERT are both feasible, valid, and reliable instruments to assess students' clinical reasoning skills in clinical practice.
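Two of the statistics reported above are straightforward to reproduce in a minimal sketch. The study's raw scores and variance components are not given in the abstract, so the data below are hypothetical: Cronbach's alpha is computed from a respondents-by-items matrix, and a simple D-study projection shows how the G-coefficient grows with the number of raters when, as reported, the rater-by-student interaction dominates the error variance:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)    # variance of the total scores
    return k / (k - 1) * (1 - item_var / total_var)

def g_coefficient(var_person, var_interaction, n_raters):
    """Relative G-coefficient when the person-by-rater interaction is
    the dominant error term; averaging over raters shrinks that error."""
    return var_person / (var_person + var_interaction / n_raters)

# Hypothetical variance components, chosen for illustration only
var_p, var_pr = 0.3, 0.7
raters_needed = next(n for n in range(1, 20)
                     if g_coefficient(var_p, var_pr, n) > 0.7)
print(raters_needed)  # with these made-up components: 6
```

With these illustrative components the projection happens to reproduce the reported threshold of 6 raters; the actual components estimated in the G-study would of course give the authoritative figure.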