2023
DOI: 10.1101/2023.03.25.23285475
Preprint

Performance of ChatGPT as an AI-assisted decision support tool in medicine: a proof-of-concept study for interpreting symptoms and management of common cardiac conditions (AMSTELHEART-2)

Abstract: Background: It is thought that ChatGPT, an advanced language model developed by OpenAI, may in the future serve as an AI-assisted decision support tool in medicine. Objective: To evaluate the accuracy of ChatGPT's recommendations on medical questions related to common cardiac symptoms or conditions. Methods: We tested ChatGPT's ability to address medical questions in two ways. First, we assessed its accuracy in correctly answering cardiovascular trivia questions (n=50), based on quizzes for medical professionals…

Cited by 22 publications (9 citation statements) | References 9 publications
“…Although the overall accuracy of ChatGPT was lower than the standard passing threshold in academics of 70%, the demonstrated accuracy of ChatGPT was comparable to one other study of complex cardiac clinical vignettes which had 50% accuracy (50/100). Compared with experts, ChatGPT gave inaccurate or incomplete responses [ 29 ]. Even with these caveats, this proof-of-concept study illustrates the potential and perils of using ChatGPT for real-time decision support in clinical settings.…”
Section: Discussion
confidence: 99%
“…Of the 20 suggestions that scored the highest, 9 were generated by ChatGPT. [38] examined the accuracy and reproducibility of ChatGPT in answering questions regarding knowledge, management, and emotional support for cirrhosis and hepatocellular carcinoma (HCC). In [39] it was found that ChatGPT correctly answered 74% of the trivia questions related to heart diseases; specifically, ChatGPT scored impressively in the domains of coronary artery disease (80%), pulmonary and venous thromboembolism (80%), atrial fibrillation (70%), heart failure (80%), and cardiovascular risk management (60%). [40] evaluated ChatGPT as a support tool for breast tumor board decision making, [41] assessed ChatGPT's capacity for clinical decision support in paediatrics, [7] evaluated ChatGPT's capacity as a clinical decision support in triaging patients for appropriate imaging services, and [42] performed a comparative analysis of the decision-making abilities of humans and LLMs.…”
Section: LLM as a Decision Support Tool
confidence: 99%
“…Current developments using ChatGPT as an AI-assisted decision-support tool in medicine are opening up new perspectives [11], at present mainly for less complex medical questions. However, further scientific studies are required to fully assess its potential.…”
Section: Risk Assessment and Decision Aids (unclassified)