2024
DOI: 10.1067/j.cpradiol.2023.04.001
Ability of ChatGPT to generate competent radiology reports for distal radius fracture by use of RSNA template items and integrated AO classifier

Cited by 16 publications (25 citation statements)
References 10 publications
“…One relevant study has demonstrated limitations concerning accuracy of recommendations for IR procedures [15]. This result is similar to what we found in a previous study about the handling of technical and medical information in report drafting for distal radius fracture [16], [17]. For evaluating the ability of ChatGPT to handle the RSNA PICC line template [2], we defined 3 distinct cases and iterated those for a parameter study (n = 5).…”
Section: Introduction (supporting)
Confidence: 77%
“…The methodology of the presented study follows the concept of the previous work [16]: cases were defined within the framework of a current RSNA template. ChatGPT was tasked with report drafting.…”
Section: Methods (mentioning)
Confidence: 99%
“…Accuracy. Several studies highlighted that ChatGPT exhibited inaccuracies when asked to respond to certain questions [14,18,23,29,32,34,35,38,43,50,52,53,64,65,67,71,72]. For instance, ChatGPT could respond with incomplete information or exhibit an inability to distinguish between truth and falsehood [21,69].…”
Section: Reliability (mentioning)
Confidence: 99%