2023
DOI: 10.1038/s41591-023-02341-4

ChatGPT is not the solution to physicians’ documentation burden

Cited by 20 publications (13 citation statements)
References 3 publications
“…The potential role for AI in crafting LORs has been previously suggested; however, this is the first study that demonstrates the feasibility of this application for AI 12,13 . Human‐authored letters have documented, problematic tendencies toward bias, and AI is suggested as a potential solution to reducing bias in LOR, though contrary arguments and data exist 3,11,14–18 . Although our study only examined biased language related to gender and was not powered to examine this aspect in detail, we observed signal of gender‐biased language in both human and AI‐authored LORs.…”
Section: Discussion
confidence: 68%
“…Approximately 57.8% of the generated responses were assessed as accurate or nearly correct. This outcome underscores the imperative for exercising caution when solely relying on AI-generated medical information and the need for continuous evaluation, as others have noted [ 16 ]. However, in another study by Walker et al [ 17 ] aimed at evaluating the reliability of medical information provided by ChatGPT-4, multiple iterations of their queries executed through the model yielded a remarkable 100% internal consistency among the generated outputs [ 17 ].…”
Section: Discussion
confidence: 74%
“…The expansive datasets also show an emergent trend: while LLMs have found robust applications in patient communication, medical writing, and auxiliary diagnosis, there remains latent potential in realms such as medical education and training, especially in simulating patient-doctor interactions. 10, 85, 93, 94, 95 In addition, the challenges posed by linguistic and cultural nuances in LLMs underscore the importance of region-specific model training and data integration.…”
Section: Discussion
confidence: 99%