2023
DOI: 10.1038/s41391-023-00705-y

Quality of information and appropriateness of ChatGPT outputs for urology patients

Cited by 46 publications (18 citation statements)
References 12 publications

“…Readability calculation (Table 1) utilized the following validated formulae: Flesch-Kincaid Reading Ease Score, Flesch-Kincaid Grade Level, Gunning-Fog Score, Simple Measure of Gobbledygook, Coleman-Liau Index, and Automated Readability Index. 1,20,21…”
Section: Methods (mentioning)
confidence: 99%
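
For readers who want to reproduce this kind of readability scoring, the sketch below shows how the six formulae named in the excerpt could be computed with the open-source Python package textstat. The cited study does not state which tool it used, and the sample response text here is invented for illustration.

# Illustrative sketch only (not the study's actual tooling): score one ChatGPT-style
# response with the six readability formulae named in the excerpt, via `textstat`.
import textstat

response = (
    "Benign prostatic hyperplasia is a non-cancerous enlargement of the prostate "
    "that can make it harder to urinate. Treatment ranges from watchful waiting "
    "and medication to minimally invasive procedures or surgery."
)

scores = {
    # Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    "Flesch-Kincaid Reading Ease": textstat.flesch_reading_ease(response),
    # Flesch-Kincaid Grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    "Flesch-Kincaid Grade Level": textstat.flesch_kincaid_grade(response),
    "Gunning-Fog Score": textstat.gunning_fog(response),
    "Simple Measure of Gobbledygook": textstat.smog_index(response),
    "Coleman-Liau Index": textstat.coleman_liau_index(response),
    "Automated Readability Index": textstat.automated_readability_index(response),
}

for name, value in scores.items():
    print(f"{name}: {value:.1f}")
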
“…Two blinded urologists scored responses for accuracy (“Is the response evidence-based and medically accurate?”), comprehensiveness (“Does the response provide sufficient information to fully inform patients about their diagnosis/treatment?”), and understandability (“Can the response be easily understood by average patients?”) using Likert items (1 = Poor, 2 = Needs Improvement, 3 = Fair, 4 = Good, 5 = Excellent). 1,20-24 Accuracy was assessed using AUA guidelines. Comprehensiveness and understandability were scored qualitatively.…”
Section: Methods (mentioning)
confidence: 99%
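
As an illustration of the rubric described above, the minimal Python sketch below records two reviewers' 5-point Likert ratings for the three domains and averages them. The reviewer scores and the aggregation step are assumptions made for illustration; the excerpt does not describe how the authors combined the two blinded reviewers' ratings.

# Minimal sketch (assumed, not the study's actual analysis) of recording two blinded
# reviewers' 5-point Likert ratings per domain and reporting the mean per domain.
from dataclasses import dataclass
from statistics import mean

LIKERT_LABELS = {1: "Poor", 2: "Needs Improvement", 3: "Fair", 4: "Good", 5: "Excellent"}

@dataclass
class Rating:
    accuracy: int           # "Is the response evidence-based and medically accurate?"
    comprehensiveness: int  # "Does the response provide sufficient information ...?"
    understandability: int  # "Can the response be easily understood by average patients?"

# Hypothetical scores from the two blinded reviewers for one ChatGPT response.
reviewer_1 = Rating(accuracy=4, comprehensiveness=3, understandability=5)
reviewer_2 = Rating(accuracy=5, comprehensiveness=3, understandability=4)

for domain in ("accuracy", "comprehensiveness", "understandability"):
    avg = mean(getattr(r, domain) for r in (reviewer_1, reviewer_2))
    print(f"{domain}: {avg:.1f} ({LIKERT_LABELS[round(avg)]})")
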
“…1 Our findings highlighted that this technology is capable of providing medical information (even if not fully comprehensive and correct), confirming the findings of other studies in urology. 2,3 The risk is that in future this technology could potentially replace the more commonly used “Dr Google” 4 to gather health care information. The human-like interaction that artificial intelligence (AI)-powered chatbots provide could inspire trust in patients, and therefore be widely used for a range of different health care questions, from diagnosis to treatments.…”
Section: Evaluating the Effectiveness Of Artificial… (mentioning)
confidence: 99%