2023
DOI: 10.1007/s10916-023-01925-4
Evaluating the Feasibility of ChatGPT in Healthcare: An Analysis of Multiple Clinical and Research Scenarios

Abstract: This paper aims to highlight the potential applications and limits of a large language model (LLM) in healthcare. ChatGPT is a recently developed LLM that was trained on a massive dataset of text for dialogue with users. Although AI-based language models like ChatGPT have demonstrated impressive capabilities, it is uncertain how well they will perform in real-world scenarios, particularly in fields such as medicine where high-level and complex thinking is necessary. Furthermore, while the use of ChatGPT in wri…

Cited by 384 publications (151 citation statements)
References 14 publications
“…Importantly, the concept of ChatGPT hallucination could be risky if the generated content is not thoroughly evaluated by researchers and health providers with proper expertise [ 37 , 56 , 73 , 77 , 79 ]. This comes in light of the ability of ChatGPT to generate incorrect content that appears plausible from a scientific point of view [ 81 ].…”
Section: Discussion
confidence: 99%
“…Incorporating AI and large language models (LLMs) such as ChatGPT into healthcare necessitates meticulous attention to emerging challenges and concerns. While ChatGPT's adeptness in mimicking human dialogue presents opportunities for improving patient-provider interactions and potentially enhancing patient adherence to prescribed treatments, the model's limited grasp of context and nuance, along with its failure to consistently recognize its own limitations, highlights the perils of unsupervised deployment in clinical settings [ 22 , 23 ].…”
Section: Review
confidence: 99%
“…Another limitation of ChatGPT is its inability to incorporate external knowledge sources, limiting its accuracy in the medical field. In contrast to some other AI systems and chatbots, ChatGPT is not designed to extract information from external sources such as medical journals or textbooks, which can provide additional context and background knowledge [ 22 ]. Furthermore, without the ability to integrate updated and reliable data, ChatGPT may not generate responses that incorporate the latest medical research or best practices, considering that its original training data extended only through 2021 [ 40 ].…”
Section: Review
confidence: 99%
“…Conversely, other areas, such as predicting postoperative complications in perioperative medicine, have not yet produced the desired results. Although many predictive models have been published, most are still in the research stage, and a valid and universally applicable intelligent tool for clinical practice has yet to be developed [ 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 ].…”
Section: Clinical Practice and Research Perspectives
confidence: 99%