The inconsistency of marking in clinical examinations is a well-documented problem. This project identified some of the factors responsible for this inconsistency. A standardized rating situation was devised: five students were videotaped as they performed part of a physical examination on simulated patients, and eighteen experienced medical and surgical examiners rated their performances using an objective, checklist-type rating form. No differences were evident between physicians and surgeons. The group of examiners was divided into three subgroups: one receiving no training, one limited training and one more extensive training. Examiners re-rated the same students 2 months after the first rating. Inter-rater reliability was satisfactory for the first ratings, and training produced no significant improvement. A substantial improvement was achieved by identifying the most inconsistent raters and removing them from the analysis. Training was shown to be unnecessary for consistent examiners and ineffective for examiners who were less consistent. On the basis of these results, only consistent examiners were selected to take part in the interactive component of the objective structured final-year examinations. The ratings in these examinations achieved high levels of inter-rater reliability. It was concluded that the combination of an objective checklist rating form, a controlled test situation and the selection of inherently consistent examiners could solve the problem of inconsistent marking in clinical examinations.
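The abstract does not specify how the most inconsistent raters were identified, but the idea can be illustrated with a minimal sketch. Assuming each examiner produces a checklist total per student, one simple approach flags examiners whose ratings deviate most, on average, from the group consensus; the data, the deviation measure and the cut-off below are all hypothetical, not taken from the study.

```python
import numpy as np

# Hypothetical ratings: rows are examiners, columns are students.
# Values are checklist totals on a 0-100 scale (illustrative only).
ratings = np.array([
    [72, 65, 80, 58, 90],
    [70, 63, 82, 60, 88],
    [55, 78, 60, 75, 70],   # a deliberately inconsistent rater
    [74, 66, 79, 57, 91],
])

# Consensus for each student: the mean across all examiners.
consensus = ratings.mean(axis=0)

# Inconsistency score per examiner: mean absolute deviation from consensus.
deviation = np.abs(ratings - consensus).mean(axis=1)

# Flag examiners whose deviation exceeds a chosen cut-off for exclusion.
CUTOFF = 10.0
inconsistent = np.where(deviation > CUTOFF)[0]
print("Deviation per examiner:", deviation.round(2))
print("Examiners flagged as inconsistent:", inconsistent)
```

Re-running the reliability analysis on the remaining examiners would then show the kind of improvement the abstract reports.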
In a previous study we described a problem-based, criterion-referenced test of the clinical competence of medical students which was felt to offer advantages over the traditional final-year examination. This paper reports the validity and reliability studies by which the value of this new test can be judged against the traditional approach. The results demonstrate a high level of content validity and provide evidence of the construct validity of the test. Efforts to obtain measures of concurrent and predictive validity were thwarted by a failure to obtain reliable assessments of ward performance from resident and consultant staff. Satisfactory levels of internal consistency were established for the whole test. Marker reliability was satisfactory in all sections of the test except those requiring examiners to rate practical clinical skills, despite the use of simulated patients, behavioural checklists and rater training. Possible solutions to this problem are discussed. It is concluded that this new approach overcomes many of the measurement problems inherent in the traditional final examination, and that it is feasible to construct and administer in the medical school setting without the allocation of additional resources.
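The abstract reports internal consistency without naming the statistic; Cronbach's alpha is the usual choice for a multi-item test of this kind, so a minimal sketch follows, assuming an examinees-by-items score matrix. The scores shown are invented for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical scores: 6 examinees on a 4-item section.
scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Values above roughly 0.7 to 0.8 are conventionally read as satisfactory internal consistency, which is the standard the abstract's claim implies.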
Patient management problems (PMPs) are being used in medical examinations with increasing frequency despite evidence which throws doubt on their validity as measures of clinical competence. This study investigated the construct validity of a PMP constructed in both written and interview formats. Each test was administered to groups of students at different levels of seniority and to two groups of doctors: interns and post-interns. The pattern of scores for the different groups was not that expected of a valid test of competence: the most competent groups (the post-interns) generally scored less well on the calculated indices than the senior students and interns. These findings were similar for both formats of the test, so cueing was not thought to be the major factor; it appears that the scoring system is at fault. A comparison of performance on the written and interview (uncued) formats showed that many more options were chosen by all groups tested on the written PMP. It was concluded that written PMPs cannot yet be regarded as a valid simulation of clinical performance. Although their content validity is high, this does not appear to be so for construct or concurrent validity.
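The construct-validity argument here is that scores should rise with clinical seniority, and the finding is that they did not. A minimal sketch of that check follows; the group means are invented for illustration and the paper's own scoring indices are not reproduced.

```python
import numpy as np

# Hypothetical mean PMP scores by group, ordered from least to most
# clinically experienced (values are illustrative only).
groups = ["junior students", "senior students", "interns", "post-interns"]
mean_scores = np.array([48.0, 61.0, 63.0, 55.0])

# A valid test of competence should score more experienced groups higher,
# i.e. the means should increase monotonically with seniority.
monotone = np.all(np.diff(mean_scores) >= 0)
print("Pattern consistent with construct validity:", monotone)
```

With the illustrative values above the post-intern mean falls below the intern mean, so the check fails, mirroring the pattern the study reports.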
Monitoring nutritional intake is an important aspect of the care of older people, particularly those at risk of malnutrition. Current practice for monitoring food intake relies on handwritten food charts, which have several inadequacies. We describe the design and validation of a tool for computer-assisted visual assessment of patient food and nutrient intake. To estimate food consumption, the application compares the pixels the user has rubbed out against predefined graphical masks: the weight of food consumed is calculated from the percentage of mask pixels that have been rubbed out. Results suggest that the application may be a useful tool for the conservative assessment of nutritional intake in hospitals.
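The pixel comparison can be sketched as follows, assuming boolean image masks and a known served weight. The function name, array shapes and values are hypothetical and stand in for the application's actual implementation.

```python
import numpy as np

def estimate_consumed_weight(rubbed_out: np.ndarray,
                             food_mask: np.ndarray,
                             served_weight_g: float) -> float:
    """Estimate grams consumed from the fraction of mask pixels rubbed out.

    rubbed_out and food_mask are boolean arrays of the same image size:
    food_mask marks the pixels belonging to a food item, and rubbed_out
    marks the pixels the user erased to indicate what was eaten.
    """
    mask_pixels = food_mask.sum()
    if mask_pixels == 0:
        return 0.0
    eaten_pixels = (rubbed_out & food_mask).sum()
    return served_weight_g * eaten_pixels / mask_pixels

# Hypothetical 4x4 image: the food occupies the left half,
# and the user rubbed out the top-left quarter.
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True
erased = np.zeros((4, 4), dtype=bool)
erased[:2, :2] = True
print(estimate_consumed_weight(erased, mask, served_weight_g=200.0))  # 100.0
```

Intersecting the erased pixels with the food mask, rather than counting all erased pixels, keeps strokes outside the food region from inflating the estimate, which suits the conservative assessment the abstract describes.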