2023
DOI: 10.1002/wps.21145
The performance of ChatGPT in generating answers to clinical questions in psychiatry: a two‐layer assessment

Jurjen J. Luykx,
Frank Gerritse,
Philippe C. Habets
et al.

Cited by 20 publications (7 citation statements). References 6 publications (10 reference statements).
“…Another similar use of ChatGPT was as an aid to answering clinical questions. A recent study evaluated the performance of users (psychiatrists and medical residents in the Netherlands) who answered several questions in psychiatry with ChatGPT, compared with nonusers, and observed that the users gave better and faster responses than the nonusers [35]. Although these applications differ from this study, they might hint that ChatGPT currently has a database that holds relevant data in the field of psychiatry, which might explain the realism of scenarios and prompts observed for SCTs 2, 3, and 4.…”
Section: Discussion
Confidence: 99%
“…Research indicates that ChatGPT excels in accuracy, completeness, nuance, and speed when generating responses to clinical inquiries in psychiatry [22]. Moreover, LLMs like ChatGPT play a pivotal role in automating the evaluation of medical literature, facilitating the identification of accurately reported research findings [23].…”
Section: Large Language Models in Medical Research
Confidence: 99%
“…OpenAI is not transparent on how GPT-4 was trained, so it is unclear whether scientific research, often behind paywalls, was included in the vast data sets that were integrated into its network during training [3]. For example, a study into ChatGPT's knowledge of clinical psychiatry found evidence for its promising accuracy, completeness, nuance, and speed, but also revealed a lack of pharmaceutical knowledge, which is typically found in textbooks rather than the web-based information ChatGPT was trained on [4].…”
Section: Potential Harms
Confidence: 99%