2023
DOI: 10.7759/cureus.40977
Analysing the Applicability of ChatGPT, Bard, and Bing to Generate Reasoning-Based Multiple-Choice Questions in Medical Physiology

Abstract: Background: Artificial intelligence (AI) is evolving in the medical education system. ChatGPT, Google Bard, and Microsoft Bing are AI-based models that can solve problems in medical education. However, the applicability of AI to create reasoning-based multiple-choice questions (MCQs) in the field of medical physiology is yet to be explored. Objective: We aimed to assess and compare the applicability of ChatGPT, Bard, and Bing in generating reasoning-based MCQs for MBBS (Bachelor of Medicine, Bachelor of Surgery) u…

Cited by 26 publications (22 citation statements)
References 25 publications
“…In a cross-sectional study, Agarwal et al. [22] However, the difficulty level was lower compared to Bard and Bing (Table 2).…”
Section: Descriptive Summary of Results
Citation type: mentioning
confidence: 88%
“…In a cross-sectional study, Agarwal et al. [22] compared different LLMs. They compared ChatGPT 3.5, Bard, and Bing on their MCQ-generating capability.…”
Section: Results
Citation type: mentioning
confidence: 99%
“…However, these are just suppositions, as we did not compare our method with others, such as AIG-using chatbots or zero-shot MCQs. Previous studies attained scoring inferences of chatbot-developed MCQs without using learning objectives (Agarwal et al., 2023; Ayub et al., 2023). However, these studies did not assess generalization or extrapolation inferences.…”
Section: Discussion
Citation type: mentioning
confidence: 99%
“…A cross-sectional study used ChatGPT, Google Bard, and Microsoft Bing to develop MCQs for a physiology course; in this study, a careful blueprint was mapped by two content experts, and inferences of scoring and generalization were collected. However, generalization was collected subjectively by asking content experts to classify the difficulty of MCQs (Agarwal et al., 2023). Another study used a qualitative methodology to assess MCQs developed by ChatGPT and ChatPDF for the dermatology board examination.…”
Section: Literature Review
Citation type: mentioning
confidence: 99%
“…Recently, numerous articles have emerged about AI applications in different fields of medicine, such as radiology, dermatology, physiology, hematology, ophthalmology, biochemistry, parasitology, neurosurgery, forensic medicine, and dental education [5][6][7][8][9][10][11][12][13][14]. These models have been helpful in solving complex medical problems, interpreting radiology reports, diagnosing diseases, writing scientific articles, and answering and generating different medical exam questions, and they have shown varying degrees of accuracy in these fields [2-5,7,8,10,11,15].…”
Section: Introduction
Citation type: mentioning
confidence: 99%