2020
DOI: 10.1371/journal.pone.0231172

Development of a prediction model for hypotension after induction of anesthesia using machine learning

Abstract: Arterial hypotension during the early phase of anesthesia can lead to adverse outcomes such as a prolonged postoperative stay or even death. Predicting hypotension during anesthesia induction is complicated by its diverse causes. We investigated the feasibility of developing a machine-learning model to predict postinduction hypotension. Naïve Bayes, logistic regression, random forest, and artificial neural network models were trained to predict postinduction hypotension, occurring between tracheal intubation a…
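The four-model comparison described in the abstract could be set up along the following lines with scikit-learn. This is a minimal sketch: the synthetic data and eight unnamed features are placeholders, not the study's actual preinduction predictors, and the hyperparameters are illustrative defaults.

```python
# Sketch of the four-classifier comparison named in the abstract.
# The data below are synthetic stand-ins for the study's real inputs.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))  # 8 hypothetical preinduction features
# Hypothetical binary label: postinduction hypotension yes/no
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "naive_bayes": GaussianNB(),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "neural_network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

All four model families expose `predict_proba`, so the same AUC-based comparison loop covers each of them.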

Cited by 46 publications (37 citation statements)
References 12 publications
“…The remaining 5169 patients were all added to the validation cohort.
  Metrics: sensitivity and specificity, discriminatory power and 95% confidence interval, number of tests avoided, negative predictive value, positive predictive value.
  Missing-data handling: not mentioned.
Isma'eel et al [ 42 ]
  Validation: the derivation cohort was chosen randomly; 30 of the 59 patients who tested positive were added randomly to the derivation cohort, and 30 of the remaining 427 patients who tested negative were also added randomly to it. The remaining 426 patients (29 positive, 397 negative) were all added to the testing cohort; during the training phase, the 60 patients used for training were split 80% for pure training and 20% for validation.
  Metrics: negative and positive predictive values, discriminatory power, percentage of avoided tests, sensitivity and specificity.
  Missing-data handling: not mentioned.
Jovanovic et al [ 43 ]
  Validation: the sample was randomly divided into three parts: training, testing, and validation samples.
  Metrics: area under the receiver operating curve, sensitivity, specificity, and positive and negative predictive values.
  Missing-data handling: not mentioned.
Kang et al [ 27 ]
  Validation: four-fold cross-validation, 75/25 split (training/validation).
  Metrics: area under the receiver operating curve, accuracy, precision, recall.
  Missing-data handling: not mentioned.
Karhade et al [ 28 ]
  Validation: ten-fold cross-validation.
  Metrics: discrimination (c-statistic or area under the receiver operating curve), calibration (calibration slope, calibration intercept), and overall performance (Brier score).
  Missing-data handling: multiple imputation with the missForest methodology was undertaken for variables with less than 30% missing data.
Kebede et al [ 29 ]
  Validation: ten-fold cross-validation, 90/10 split (training/testing).
  Metrics: area under the receiver operating curve; classification accuracy (true positives, false positives, precision, recall).
  Missing-data handling: patients were excluded from the study if their information was incomplete or unreadable, or their manual record was lost.
Khanji et al [ 47 ]
  Validation: ten-fold cross-validation.
  Metrics: Akaike Information Criterion, area under the receiver operating curve.
  Missing-data handling: patients with missing data at the end of the study (± 6 months) were excluded.
Kim et al [ 30 ] …”
Section: Results
confidence: 99%
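The evaluation pattern recurring across the tabulated studies — k-fold cross-validation reporting AUC, sensitivity, and specificity — can be sketched as follows. The data, the logistic regression model, and the 0.5 decision threshold are illustrative assumptions, not any cited study's actual setup.

```python
# Sketch of the common evaluation scheme from the table above:
# stratified 10-fold CV reporting AUC, sensitivity, and specificity.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))                       # synthetic features
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

aucs, sens, specs = [], [], []
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
for train_idx, test_idx in cv.split(X, y):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    prob = model.predict_proba(X[test_idx])[:, 1]
    pred = (prob >= 0.5).astype(int)                # illustrative threshold
    tn, fp, fn, tp = confusion_matrix(y[test_idx], pred).ravel()
    aucs.append(roc_auc_score(y[test_idx], prob))
    sens.append(tp / (tp + fn))                     # true positive rate
    specs.append(tn / (tn + fp))                    # true negative rate

print(f"AUC {np.mean(aucs):.3f}, "
      f"sensitivity {np.mean(sens):.3f}, specificity {np.mean(specs):.3f}")
```

Stratified folds keep the class ratio roughly constant across splits, which matters when the outcome (e.g. a positive test) is rare, as in several of the tabulated cohorts.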
“…As noted above, one study using decision tree analysis used Quinlan’s C5.0 decision tree algorithm [ 15 ], while a second used an earlier version of this program (C4.5) [ 20 ]. Other decision tree analyses used various versions of R [ 18 , 19 , 22 , 24 , 27 , 47 ] or International Business Machines (IBM) Statistical Package for the Social Sciences (SPSS) [ 16 , 17 , 33 , 47 ], used the Azure Machine Learning Platform [ 30 ], or implemented the model in Python [ 23 , 25 , 46 ]. Artificial neural network analyses used Neural Designer [ 34 ] or Statistica V10 [ 35 ].…”
Section: Results
confidence: 99%
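As an illustration of the Python-based decision tree analyses mentioned above: scikit-learn provides a CART-style tree rather than an implementation of Quinlan's C4.5/C5.0, so the following is a stand-in sketch on synthetic data, not a reproduction of any cited study's method.

```python
# Minimal CART decision tree sketch (scikit-learn does not implement
# C4.5/C5.0); data and feature names are synthetic placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = rng = np.random.default_rng(2)
X = rng.normal(size=(300, 3))
y = (X[:, 0] > 0.2).astype(int)   # label depends only on feature f0

tree = DecisionTreeClassifier(max_depth=3, random_state=2).fit(X, y)
print(export_text(tree, feature_names=["f0", "f1", "f2"]))
```

`export_text` prints the fitted splits as an indented rule list, which is the kind of human-readable output that makes shallow trees attractive in clinical prediction work.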
“…In our future work, we will check the feasibility of the random forest model for a large number of patients. We have previously described the development of a prediction model using an ANN [ 10 ]. Our previous model is comparable to that of Kendale et al [ 8 ] and achieves a better result under similar conditions.…”
Section: Discussion
confidence: 99%
“…In particular, the data analyzed in our study came from mechanical ventilators and anesthetic workstations, which have been overlooked in most of the past literature. In recent years, research on prediction models using machine learning has been published steadily in several clinical fields, such as arrhythmia, postoperative mortality, morbidity, and hypotension [ 10 , 21 , 31 , 32 , 33 , 34 ]. This trend clearly leads to many advantages, such as improvement of the medical environment, improved patient safety, and improved prognosis.…”
Section: Discussion
confidence: 99%