2024
DOI: 10.1016/j.ijom.2023.09.005
The impact and opportunities of large language models like ChatGPT in oral and maxillofacial surgery: a narrative review

B. Puladi,
C. Gsaxner,
J. Kleesiek
et al.
Cited by 12 publications (11 citation statements)
References 47 publications
“…A plausible explanation for this discrepancy can be related to different question styles and different exam settings. Taken together, this highlights the need to assess the performance of AI-based models in various disciplines, using different questions' format, and compared to human performance (Borchert et al, 2023;Chen et al, 2023;Deiana et al, 2023;Flores-Cohaila et al, 2023;Puladi et al, 2023). Finally, it is important to acknowledge the limitations inherent in this study.…”
Section: Discussion
confidence: 91%
“…Thus, ensuring the generation of correct, reliable, and credible medical information is of high importance and should be considered by AI model developers, considering the current evidence showing a generation of inaccurate information by these AI-based models [26][27][28]. Additionally, such an approach is recommended in various health domains given the intricacies and peculiarities of each subject (e.g., maxillofacial surgery, dentistry, and pharmacy) [29][30][31][32].…”
Section: Discussion
confidence: 99%
“…This phenomenon of AI "hallucination" [6,32,34,41,44,[47][48][49][50][51][52][53], previously described as "stochastic parroting" [19], directly challenges the ethical principle of nonmaleficence, which calls for the avoidance of patient harm. In healthcare, the implications of such inaccuracies can be profound, ranging from false diagnoses to the suggestion of nonexistent symptoms or treatment protocols, thereby risking patient safety [36,[54][55][56]. While LLMs can help to guide surgical candidacies and recommend treatment plans rooted in evidence-based data analysis, there exists a concern regarding their reliability [35,36,57].…”
Section: Fabricated Content and "Hallucinations"
confidence: 99%