Objectives: To compare stand-alone multiple choice questions (MCQs) with integrated clinical-scenario (case-cluster) multiple choice questions (CS-MCQs) in a problem-based learning (PBL) environment.

Methods: A retrospective descriptive analysis of MCQ examinations was conducted in a course that integrates the subspecialties of anatomical pathology, chemical pathology, hematology, immunology, microbiology and pharmacology. The MCQ items were analyzed for reliability (Kuder–Richardson 20, KR-20), level of difficulty (Pi), discrimination index (Di), item distractors and student performance. The statistics were extracted from the Integrity online item-analysis programme, and the results of the standard stand-alone and CS multiple choice questions were compared.

Results: KR-20 was consistently high for both the CS-MCQs and the stand-alone MCQs. KR-20 and Pi were higher for the CS-MCQs, although the differences in Pi and Di between the two formats were not statistically significant. A range of difficulty levels, mapped to Bloom's taxonomy, was found. The mean class scores were higher for the CS-MCQ examination, and compiling the CS-MCQ examination was more challenging.

Conclusions: CS-MCQs compare favorably with stand-alone MCQs and provide opportunities for the integration of sub-specialties and for assessment in keeping with PBL. They assess students' cognitive skills and are reliable and practical. Different levels of item difficulty promote multi-logical and critical thinking. Students' scores were higher on the CS-MCQ examination, which may suggest better understanding of the material and/or clearer questions. The scenarios have to flow logically, and increasing the number of scenarios allows more course content to be examined.
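The item statistics named above follow standard psychometric definitions. As a minimal sketch (not the Integrity programme's actual implementation, and with all variable names ours), KR-20, the difficulty index Pi and the discrimination index Di can be computed from a 0/1 item-response matrix as follows:

```python
import numpy as np

def item_analysis(responses: np.ndarray):
    """responses: (n_students, n_items) matrix of 0/1 item scores."""
    n_students, k = responses.shape
    totals = responses.sum(axis=1)

    # Difficulty index Pi: proportion of students answering each item correctly.
    p = responses.mean(axis=0)

    # KR-20 reliability: (k / (k - 1)) * (1 - sum(p*q) / variance of total scores).
    kr20 = (k / (k - 1)) * (1 - (p * (1 - p)).sum() / totals.var(ddof=0))

    # Discrimination index Di: difference in proportion correct between the
    # top and bottom 27% of students ranked by total score.
    n_grp = max(1, round(0.27 * n_students))
    order = np.argsort(totals)
    d = responses[order[-n_grp:]].mean(axis=0) - responses[order[:n_grp]].mean(axis=0)

    return kr20, p, d
```

By common convention, items with Pi roughly between 0.3 and 0.7 and Di above about 0.2 are considered acceptable, which is the sense in which the abstract reports a range of difficulty levels.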
Background: At the University of the West Indies, Trinidad and Tobago, third-year undergraduate teaching is a hybrid of problem-based learning (PBL) and didactic lectures. PBL discourages students from simply acquiring basic factual knowledge and instead encourages them to integrate these basic facts with clinical knowledge and skills. Recently, progressive disclosure questions (PDQs), also known as modified essay questions (MEQs), were introduced as an assessment tool reported to be in keeping with the PBL philosophy.

Objective: To describe the effectiveness of the PDQ as an assessment tool in a course that integrates the sub-specialties of Anatomical Pathology, Chemical Pathology, Haematology, Immunology, Microbiology, Pharmacology and Public Health.

Methods: A descriptive analysis of PDQ examination questions and of students' performance in these examinations was performed for the academic years 2011–2012, 2012–2013 and 2013–2014 in one third-year course that integrates the sub-specialties listed above.

Results: The PDQs reflected real-life scenarios and comprised questions of different levels of difficulty by Bloom's taxonomy, from basic recall through more difficult questions requiring analytical, interpretative and problem-solving skills. In the integrated PDQs for 2011–2012, 2012–2013 and 2013–2014 respectively, 52.9, 52.5 and 58 % of the questions were simple recall of facts; by sub-specialty this ranged from 26.7 to 100 %, 18.8 to 70 %, and 23.1 to 100 % in the three years respectively. The remainder required higher-order cognitive skills. For some sub-specialties, students' performance was better where the examination was mostly basic recall and poorer where there were more higher-order questions. The sub-specialties contributed different percentages of the integrated examinations, ranging from 4 % in Public Health to 22.9 % in Anatomical Pathology.

Conclusion: The PDQ examined students in an integrated fashion in keeping with the PBL process. More care should be taken to ensure that appropriate questions are included in the examinations to assess higher-order cognitive skills. However, in an integrated course some sub-specialties may not have content requiring higher-cognitive-level questions in certain clinical cases, so clinical cases that integrate all the sub-specialties should be chosen with care.
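As a hedged illustration of the tallies reported above (the tagging scheme and the example data are hypothetical, not taken from the study), the percentage of simple-recall questions per sub-specialty can be computed like this:

```python
from collections import defaultdict

def recall_percentages(items):
    """items: list of (subspecialty, is_recall) pairs for one examination."""
    counts = defaultdict(lambda: [0, 0])  # subspecialty -> [recall count, total]
    for subspecialty, is_recall in items:
        counts[subspecialty][0] += int(is_recall)
        counts[subspecialty][1] += 1
    return {s: 100.0 * recall / total for s, (recall, total) in counts.items()}

exam = [("Haematology", True), ("Haematology", False),
        ("Public Health", True), ("Pharmacology", False)]
print(recall_percentages(exam))  # {'Haematology': 50.0, 'Public Health': 100.0, 'Pharmacology': 0.0}
```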
Background: Ensuring objectivity and maintaining reliability are necessary for any form of assessment to be considered valid. Evaluation of students in problem-based learning (PBL) tutorials by tutors has drawn the attention of critics, who cite many challenges and limitations. The aim of this study was to determine the extent of tutor variability in assessing the PBL process in the Faculty of Medical Sciences, The University of the West Indies, St Augustine Campus, Trinidad and Tobago.

Method: All 181 students of Year 3 MBBS were assigned randomly to 14 PBL groups. Of the 18 tutors, 12 assessed three groups each, one assessed two groups and four assessed one group each; in the end, each group had been assessed three times by different tutors. The tutors used a PBL assessment rating scale of 12 criteria, each scored on a six-point scale, to assess each PBL group. To test the stated hypotheses, independent t-tests, one-way ANOVA followed by post-hoc Bonferroni tests, intraclass correlation and Pearson product-moment correlations were performed.

Result: The analysis revealed significant differences between the highest- and lowest-rated groups (t = 12.64; p < 0.05) and between the most lenient and most stringent raters (t = 27.96; p < 0.05). ANOVA and post-hoc analysis for the highest- and lowest-rated groups revealed that lenient and stringent raters contributed significantly (p < 0.01) to diluting the scores in their respective categories. The intraclass correlations (ICC) among the ratings of different tutors showed low agreement except for three groups (Groups 6, 8 and 13) (r = 0.40). The correlation between tutors' PBL experience and their mean ratings was moderate (r = 0.52), though not statistically significant (p > 0.05).

Conclusion: Leniency and stringency among raters affect objectivity and reliability to a great extent, as is evident from the present study. More rigorous tutor training in the principles of assessment is therefore recommended, and putting that knowledge into practice to overcome leniency and stringency is essential.
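For readers wishing to reproduce this kind of analysis, the sketch below computes a one-way intraclass correlation, ICC(1,1), together with a lenient-versus-stringent rater comparison. It assumes a ratings matrix with one row per PBL group and one column per rating occasion (each group here was rated three times); the data are invented and this is not the study's actual analysis script.

```python
import numpy as np
from scipy import stats

def icc_oneway(ratings: np.ndarray) -> float:
    """ICC(1,1) from a (n_groups, k_ratings) matrix via one-way ANOVA."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_between = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Leniency vs stringency: independent t-test between the scores awarded by
# the most lenient and the most stringent tutor (hypothetical values).
lenient = np.array([5.2, 5.5, 5.8, 5.4])
stringent = np.array([3.1, 2.9, 3.4, 3.0])
t, p = stats.ttest_ind(lenient, stringent)
```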
Background: With changes in teaching methods in medicine, assessment tools have also evolved in order to be valid, reliable, practical, analyzable and not time-consuming. After questions of reliability and practicality arose with free-response short-answer questions, we replaced them with extended matching questions (EMQs). A previous analysis of the same group of students, over the same period, showed high reliability and discrimination with standard multiple choice questions (MCQs). The objective was to describe the efficiency of EMQs in third-year medicine courses.

Methods: Item-analysis reports of EMQ results generated by the Castle Rock Integrity programme over a three-year period were analyzed. There were 25 EMQ items in each course each year, each with nine answer options.

Results: The Kuder–Richardson 20 reliability mean ranged from 0.447 to 0.674. The Spearman-Brown split-half reliability coefficient mean ranged from 0.443 to 0.685, and the Spearman-Brown prophecy reliability mean from 0.614 to 0.837. The Guttman split-half reliability coefficient mean ranged from 0.441 to 0.718. The difficulty mean ranged from 0.491 to 0.719, and the corrected point-biserial coefficient mean from 0.118 to 0.255. The proportion of items with all distractors functioning ranged from 16 % to 40 %, and non-functioning distractors accounted for 14.5 % to 28 % of the total.

Conclusions: EMQs showed reliability, though lower than that of the MCQs previously analyzed; this may be due to the much smaller number of items, so increasing the number of EMQs should be considered. There was a high number of functioning distractors; poor distractors should be revised.
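The split-half and prophecy coefficients reported above follow the standard Spearman-Brown formulas, sketched below under the assumption of a 0/1 response matrix (illustrative only; the study's values came from the Integrity item-analysis reports):

```python
import numpy as np

def split_half_spearman_brown(responses: np.ndarray) -> float:
    """responses: (n_students, n_items) 0/1 matrix; odd-even split-half."""
    odd = responses[:, 0::2].sum(axis=1)
    even = responses[:, 1::2].sum(axis=1)
    r_half = np.corrcoef(odd, even)[0, 1]
    return 2 * r_half / (1 + r_half)  # Spearman-Brown correction

def spearman_brown_prophecy(r: float, lengthen_factor: float) -> float:
    """Projected reliability if the test is lengthened by the given factor."""
    return lengthen_factor * r / (1 + (lengthen_factor - 1) * r)

# The prophecy formula quantifies the conclusion that more EMQs may help:
# doubling a 25-item paper with observed reliability 0.55 projects ~0.71.
print(spearman_brown_prophecy(0.55, 2))
```

A distractor is commonly counted as functioning when at least about 5 % of examinees select it; the functioning and non-functioning percentages above reflect a criterion of this kind.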