2023
DOI: 10.3390/healthcare11142046
ChatGPT Knowledge Evaluation in Basic and Clinical Medical Sciences: Multiple Choice Question Examination-Based Performance

Abstract: The Chatbot Generative Pre-Trained Transformer (ChatGPT) has garnered great attention from the public, academicians and science communities. It responds with appropriate and articulate answers and explanations across various disciplines. For the use of ChatGPT in education, research and healthcare, different perspectives exist with some level of ambiguity around its acceptability and ideal uses. However, the literature is acutely lacking in establishing a link to assess the intellectual levels of ChatGPT in th…

Cited by 22 publications (15 citation statements)
References 34 publications
“…The overall performance of ChatGPT was lower than that of the medical students. More recently, Meo and Al-Masri ( 32 ) reported that ChatGPT obtained 72% marks, a reasonable score, in a basic and clinical medical sciences MCQ-based examination. In another study, Meo, Al-Khlaiwi, et al ( 33 ) compared Bard and ChatGPT knowledge on three topics, including endocrinology, diabetes, and diabetes technology, through an MCQ examination.…”
Section: Discussion
confidence: 99%
“…ChatGPT provides accurate information; however, it may generate incorrect responses. The most probable reasons are outdated information and misinterpretation of complex questions, lengthy questions, and formulas ( 32 , 33 ).…”
Section: Discussion
confidence: 99%
“…Meo’s team created a topic-based question bank containing multiple-choice questions from various medical textbooks and university exam banks to study ChatGPT’s level of knowledge in basic and clinical medicine, its performance on multiple-choice examinations, and its potential impact on the medical examination system. 28 The results showed that ChatGPT obtained 37/50 (74%) in basic medicine and 35/50 (70%) in clinical medicine, with an overall mean score of 72/100 (72%). The authors concluded that ChatGPT achieved satisfactory scores in both basic and clinical medicine subjects and demonstrated some degree of comprehension and interpretation, suggesting that ChatGPT may have a tutoring role for teachers as well as students in a medical education setting.…”
Section: Medical Education
confidence: 94%
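The score arithmetic reported in the statement above can be sketched in a few lines of Python; the `percent` helper is a hypothetical name, and the numbers are the values quoted from the study (37/50 basic, 35/50 clinical, 72/100 overall).

```python
def percent(correct: int, total: int) -> float:
    """Return a raw score as a percentage."""
    return 100 * correct / total

# Values as reported for ChatGPT in the cited MCQ examination.
basic = percent(37, 50)          # basic medical sciences: 74.0
clinical = percent(35, 50)       # clinical medical sciences: 70.0
overall = percent(37 + 35, 100)  # overall mean: 72.0

print(basic, clinical, overall)  # 74.0 70.0 72.0
```

This is only a restatement of the reported figures, confirming that the 72% overall mean is consistent with the two per-subject scores.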
“… 1 ChatGPT (Chat Generative Pre-trained Transformer), a state-of-the-art language model, has emerged as a potential tool in medical education. 2 This tool operates primarily through prompt interpretation and is capable of producing reasoned responses that are difficult to distinguish from human-produced language. 3 Its intrinsic transformer architecture also enables ChatGPT to be proficient in understanding natural language.…”
Section: Introduction
confidence: 99%