2023
DOI: 10.1101/2023.06.29.23292057
Preprint

Comparison of ChatGPT vs. Bard to Anesthesia-related Queries

Abstract: We investigated the ability of large language models (LLMs) to answer anesthesia-related queries prior to surgery from a patient's point of view. In the study, we introduced textual data evaluation metrics, investigated the "hallucination" phenomenon, and evaluated the feasibility of using LLMs at the patient-clinician interface. ChatGPT was found to be lengthier, more intellectual, and more effective in its responses than Bard. Upon clinical evaluation, no "hallucination" errors were reported from ChatGPT, whereas we…

Cited by 4 publications (3 citation statements)
References 50 publications
“…Both chatbots provided illogical answers and did not always address the knowledge content in the questions. Similarly, Patnaik & Hoffmann [53] in Texas compared the performance of ChatGPT vs. Bard in answering anesthesia-related queries prior to surgery from a patient's point of view and concluded that, though both gave correct responses, they should be considered a useful clinical resource to assist communication between clinicians and patients, not a replacement for the pre-anesthesia consultation.…”
Section: Discussion
confidence: 99%
“…Rao et al [51] proposed a qualitative framework for the Myers–Briggs Type Indicator [52] by comparing the performance of LLMs in evaluating human personalities. Patnaik & Hoffmann [53], who investigated the hallucinations of LLMs in clinical medicine from patients' point of view on anesthesia before surgery, found that ChatGPT exhibited intellectually superior performance over Bard. Patil et al [54] compared the radiology knowledge of ChatGPT and Bard and concluded that both display reasonable radiology knowledge and should be used with conscious knowledge of their limitations.…”
Section: Literature Review
confidence: 99%