Background: ChatGPT, an artificial intelligence (AI) system based on large-scale language models, has fueled interest in medical care. However, the ability of AI to understand and generate text in a given language is constrained by the quality and quantity of training data available for that language. This study aims to provide qualitative feedback on ChatGPT's problem-solving capabilities in medical education and clinical decision-making in Chinese. Methods: A dataset from the Clinical Medicine Entrance Examination for Chinese Postgraduates was used to assess the effectiveness of ChatGPT-3.5 on medical knowledge in the Chinese language. The indicators of accuracy, concordance (whether the explanation affirms the answer), and frequency of insights were used to assess ChatGPT's performance on original and encoded medical questions. Results: In our evaluation, ChatGPT scored 153.5/300 on the original questions in Chinese, slightly above the passing threshold of 129/300. It showed low accuracy in answering open-ended medical questions, with a total accuracy of 31.5%. Nevertheless, ChatGPT demonstrated a commendable level of concordance (90% across all questions) and generated innovative insights for most problems (at least one significant insight for 80% of all questions). Conclusion: ChatGPT's performance in Chinese was suboptimal for medical education and clinical decision-making compared with its performance in English. However, it demonstrated high internal concordance and generated multiple insights in the Chinese language. Further research should investigate language-based differences in ChatGPT's healthcare performance.
INTERNATIONAL REGISTERED REPORT RR2-https://doi.org/10.1101/2023.04.12.23288452
BACKGROUND ChatGPT, an artificial intelligence (AI) system powered by large-scale language models, has garnered significant interest in healthcare. Its performance depends on the quality and amount of training data available for a specific language. This study assesses ChatGPT's ability in medical education and clinical decision-making within the Chinese context. OBJECTIVE To evaluate ChatGPT's performance on the Chinese National Medical Licensing Examination (NMLE). METHODS We utilized a dataset from the Chinese National Medical Licensing Examination (NMLE) to assess ChatGPT-4's proficiency in medical knowledge in the Chinese language. Performance indicators, including score, accuracy, and concordance (confirmation of the answer by its explanation), were employed to evaluate ChatGPT's effectiveness on both original and encoded medical questions. Additionally, we translated the original Chinese questions into English to explore potential avenues for improvement. RESULTS ChatGPT scored 442/600 on the original questions in Chinese, surpassing the passing threshold of 360/600. However, it demonstrated reduced accuracy on open-ended questions, with an overall accuracy rate of 47.7%. Despite this, ChatGPT displayed commendable consistency, achieving a 75% concordance rate across all case-analysis questions. Moreover, translating the Chinese case-analysis questions into English yielded only a marginal, non-significant improvement in ChatGPT's performance (P = 0.728). CONCLUSIONS ChatGPT exhibits notable precision and reliability when handling the NMLE in the Chinese language. Translating NMLE questions from Chinese to English does not improve ChatGPT's performance.