Background: The competence of ChatGPT (Chat Generative Pre-Trained Transformer) in non-English languages is not well studied.
Objective: This study compared the performance of GPT-3.5 and GPT-4 on the Japanese Medical Licensing Examination (JMLE) to evaluate the reliability of these models for clinical reasoning and medical knowledge in non-English languages.
Methods: This study used the default mode of ChatGPT, which is based on GPT-3.5; the GPT-4 model of ChatGPT Plus; and the 117th JMLE, administered in 2023. A total of 254 questions were included in the final analysis and were categorized into 3 types: general, clinical, and clinical sentence questions.
Results: GPT-4 outperformed GPT-3.5 in accuracy across general, clinical, and clinical sentence questions. GPT-4 also performed better on difficult questions and on questions about specific diseases. Furthermore, GPT-4 met the passing criteria for the JMLE, indicating its reliability for clinical reasoning and medical knowledge in non-English languages.
Conclusions: GPT-4 could become a valuable tool for medical education and clinical support in non–English-speaking regions, such as Japan.
Increasingly popular worldwide, Japanese cuisine includes several raw preparations, such as sashimi and sushi; however, limited information on food poisoning from local Japanese foods is available in the English-language literature. Without appropriate knowledge, physicians may underdiagnose traveler's diarrhea in people returning from Japan. To provide accurate information to primary care physicians worldwide, we conducted a narrative review of food poisoning research published in Japanese and English over the past four years, considering the frequency and clinical importance of the various presentations.
The treatment of rheumatoid arthritis (RA) has advanced from steroids to disease-modifying antirheumatic drugs (DMARDs) and biologics such as tumor necrosis factor (TNF) and interleukin-6 (IL-6) inhibitors. Historically, steroids were the mainstay of clinical RA treatment; however, the development of DMARDs changed the structure of RA therapy, and biologics can further alleviate RA symptoms. This case report describes the secondary failure of tocilizumab in treating RA presenting with fatigue. Because tocilizumab lowers C-reactive protein (CRP) levels, detecting RA exacerbation can be difficult; therefore, a precise patient history and thorough physical examination are necessary. This case demonstrates the complexity of treating elderly-onset RA and reports practical methods for effective treatment.