Background
ChatGPT is a large language model that has performed well on professional examinations in medicine, law, and business. However, it is unclear how ChatGPT would perform on an examination assessing professionalism and situational judgement for doctors.

Objective
We evaluated the performance of ChatGPT on the Situational Judgement Test (SJT): a national examination taken by all final-year medical students in the United Kingdom. The examination is designed to assess attributes such as communication, teamwork, patient safety, prioritization skills, professionalism, and ethics. It differs from other medical examinations, such as the United States Medical Licensing Examination (USMLE), in that it relies less on memorization and factual recall.

Methods
All questions from the UK Foundation Programme Office's (UKFPO's) 2023 SJT practice examination were entered into ChatGPT. For each question, ChatGPT's answers and rationales were recorded and assessed against the official UKFPO scoring template. Questions were categorized into domains of Good Medical Practice on the basis of the domains referenced in the rationales provided in the scoring sheet; questions without clear domain links were screened by reviewers and assigned one or more domains. ChatGPT's overall performance, as well as its performance across the domains of Good Medical Practice, was evaluated.

Results
Overall, ChatGPT performed well, scoring 76% on the SJT, but it achieved full marks on only a minority of questions (9%), which may reflect flaws in ChatGPT's situational judgement, inconsistencies in the reasoning across questions in the examination itself, or both. ChatGPT performed consistently across the 4 domains outlined in Good Medical Practice for doctors.

Conclusions
Further research is needed to understand the potential applications of large language models, such as ChatGPT, in medical education for standardizing questions and providing consistent rationales in examinations assessing professionalism and ethics.