2023
DOI: 10.1177/21925682231195783
ChatGPT and its Role in the Decision-Making for the Diagnosis and Treatment of Lumbar Spinal Stenosis: A Comparative Analysis and Narrative Review

Abstract: Study Design Comparative Analysis and Narrative Review. Objective To assess and compare ChatGPT’s responses to the clinical questions and recommendations proposed by The 2011 North American Spine Society (NASS) Clinical Guideline for the Diagnosis and Treatment of Degenerative Lumbar Spinal Stenosis (LSS). We explore the advantages and disadvantages of ChatGPT’s responses through an updated literature review on spinal stenosis. Methods We prompted ChatGPT with questions from the NASS Evidence-based Clinical Gu…

Cited by 22 publications (18 citation statements). References 47 publications.
“…On the topic of post-colonoscopy management, ChatGPT provided responses with 90% adherence to guidelines and 85% accuracy [ 13 ], suggesting beneficial use for healthcare providers and patients. ChatGPT’s recommendations on the management of lumbar spinal stenosis were also in line with findings in the current literature [ 9 ]. When asked guideline-based questions on urological topics, ChatGPT provided only 60% appropriate responses [ 10 ].…”
Section: Discussion (supporting)
confidence: 72%
“…ChatGPT has already been tested and compared to several clinical guidelines from varying medical departments, for example for treatment of advanced solid tumors [ 7 ], spine surgery [ 8 , 9 ], urology [ 10 ], and diabetic ketoacidosis [ 11 , 12 ].…”
Section: Discussion (mentioning)
confidence: 99%
“…Already, several studies are exploring the feasibility of integrating systems such as GPT to generate high-quality responses to patient inquiries and aid clinical decision-making across various medical specialties. 10,15,16 Of note, crafting effective prompts entailed iterative trial and error. The potential for performance variation underscores the importance of understanding the model's reliance on training-data patterns and ensuring the relevance and quality of examples provided.…”
Section: Discussion (mentioning)
confidence: 99%