2022
DOI: 10.3233/shti210882
Length of Stay Prediction in Neurosurgery with Russian GPT-3 Language Model Compared to Human Expectations

Abstract: Patients, relatives, doctors, and healthcare providers all anticipate evidence-based length of stay (LOS) prediction in neurosurgery. This study aimed to assess the quality of LOS prediction by the GPT-3 language model applied to narrative medical records in neurosurgery, compared with doctors' and patients' expectations. We found no significant difference (p = 0.109) between doctors', patients', and the model's predictions, with neurosurgeons tending to be more accurate in their prognoses. The modern neural network languag…

Cited by 6 publications (7 citation statements) | References 3 publications
“…Burdenko National Medical Research Center of Neurosurgery for the period between 2000 and 2017. Data preprocessing and modeling details were thoroughly presented in our previous work (2). We fine-tuned the Generative Pre-trained Transformer 3 model, previously trained on a large (600+ GB) corpus in the Russian language (ruGPT3), in two stages totaling 22 epochs, using a top-level batch size of 256, an overall batch size of 16, a learning rate of 1e-5, the L1 loss function, and the Adam optimizer.…”
Section: Methods
confidence: 99%
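The excerpt above implies two things worth making concrete: a top-level (effective) batch of 256 reached from micro-batches of 16, which corresponds to gradient accumulation, and L1 loss (mean absolute error) as the regression objective for LOS in days. The sketch below illustrates both pieces of arithmetic in plain Python; the helper names are hypothetical and this is not the authors' code, only an interpretation of the stated configuration.

```python
# Hypothetical illustration of the training configuration described above,
# not the authors' implementation.

def accumulation_steps(effective_batch: int, micro_batch: int) -> int:
    """Micro-batches to accumulate before one optimizer step, assuming the
    top-level batch of 256 is built from overall batches of 16."""
    if effective_batch % micro_batch != 0:
        raise ValueError("effective batch must be a multiple of micro batch")
    return effective_batch // micro_batch

def l1_loss(predicted: list[float], actual: list[float]) -> float:
    """Mean absolute error in days -- the L1 objective for LOS regression."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

steps = accumulation_steps(256, 16)        # 16 micro-batches per update
loss = l1_loss([7.0, 10.0], [5.0, 12.0])   # 2.0 days mean absolute error
```

L1 loss is a natural choice here because LOS distributions are right-skewed; mean absolute error is less dominated by rare very long stays than squared error would be.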
“…The evidence-based prognosis of hospital stay might be valuable for patients, relatives, doctors, and healthcare providers in high-tech surgery (1). We have recently evaluated the performance of the GPT3 neural network model pre-trained on a massive amount of narrative texts in the Russian language and fine-tuned on the neurosurgical dataset from Electronic Health Records (EHR) to predict the inpatient length of stay (LOS) in neurosurgery (2). We have earlier demonstrated the feasibility of using unstructured medical texts in such a prediction task (3,4).…”
Section: Introduction
confidence: 99%
“…1,2 They have been used for a wide range of applications in health care, including predicting length of postsurgical hospital stay, captioning medical images, summarizing radiology reports, and named entity recognition of electronic health record notes. [3][4][5][6] Among these models, ChatGPT (OpenAI) has emerged as a particularly powerful tool based on GPT-3.5 that was designed specifically for the task of generating natural and contextually appropriate responses in a conversational setting. Building on the GPT-3 model, GPT-3.5 was trained on a larger corpus of textual data and with additional training techniques like Reinforcement Learning from Human Feedback (RLHF), which incorporates human knowledge and expertise into the model.…”
Section: Introduction
confidence: 99%
“…These models, including bidirectional encoder representations from transformers (BERT) and generative pretrained transformer 3 (GPT-3), are trained on massive amounts of text data and excel at natural language processing tasks such as text summarization and responding to queries. They have been used for a wide range of applications in health care, including predicting length of postsurgical hospital stay, captioning medical images, summarizing radiology reports, and named entity recognition of electronic health record notes.…”
Section: Introduction
confidence: 99%
“…One notable example is the Generative Pretrained Transformer 3 (GPT-3), which has been extensively pre-trained on massive amounts of text data, allowing it to analyze and generate text in various healthcare domains. These LLMs have found wide applications in the healthcare field through pre-training on vast amounts of textual data and abstract analysis of texts, such as predicting postoperative hospitalization time and generating electronic health records [1,2]. This indicates the potential applications of LLMs in clinical [3,4,5,6,7], educational [8,9,10], and research settings [11,12].…”
Section: Introduction
confidence: 99%