2015
DOI: 10.1093/jamia/ocv051
National Veterans Health Administration inpatient risk stratification models for hospital-acquired acute kidney injury

Abstract: This study demonstrated that, although all the models tested had good discrimination, performance characteristics varied between methods, and the random forests models did not calibrate as well as the lasso or logistic regression models. In addition, novel modifiable risk factors were explored and found to be significant.
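The abstract's central finding, that models can match on discrimination yet differ on calibration, can be illustrated with a minimal sketch (synthetic data, not the study's VHA cohort or code): discrimination is measured with AUC and calibration with the Brier score, for a logistic regression versus a random forest.

```python
# Hypothetical sketch on synthetic data: similar discrimination (AUC) does not
# guarantee similar calibration (Brier score), the pattern the abstract reports
# for random forests vs. lasso/logistic regression.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    # AUC summarizes ranking of cases vs. controls; Brier score penalizes
    # miscalibrated probabilities even when the ranking is good.
    results[name] = (roc_auc_score(y_te, p), brier_score_loss(y_te, p))

for name, (auc, brier) in results.items():
    print(f"{name}: AUC={auc:.3f}  Brier={brier:.3f}")
```

Calibration curves (`sklearn.calibration.calibration_curve`) would give a fuller picture, but the two scalar metrics already separate the two properties the abstract contrasts.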

Cited by 38 publications (44 citation statements) | References 66 publications
“…In AKI predictive modeling, logistic regression with backward or forward selection (wrapper method) is often used to select a subset of features for model building 7 ; chi-squared test (filter) 8 , random forest (embedded) 9 , and gradient boosting machine (embedded) 10 have also been applied to illustrate the feature importance and ranking in AKI prediction. With the increasing variety of feature selection methods and their frequent utilization in the health informatics research community, new questions arise, namely there is no systematic way to choose the most appropriate feature selection method for a given domain and problem, which often depends on two aspects 11 : (a) the stability of FS ranking with respect to different samples, and (b) the prediction accuracy of FS subset effectively representing the entire data.…”
Section: Introduction (mentioning)
confidence: 99%
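The three feature-selection families the quoted passage names can be sketched side by side; this is a generic illustration on synthetic data, not the cited studies' pipelines: a wrapper (recursive feature elimination around logistic regression), a filter (chi-squared scores computed independently of any model), and an embedded method (random-forest importances).

```python
# Hypothetical sketch of wrapper, filter, and embedded feature selection,
# the three families contrasted in the quoted passage. Synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, chi2
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           random_state=0)
k = 4  # number of features to keep

# Wrapper: repeatedly refit the model, dropping the weakest feature each round.
wrapper = RFE(LogisticRegression(max_iter=1000), n_features_to_select=k).fit(X, y)
wrapper_idx = np.where(wrapper.support_)[0]

# Filter: rank features by a model-free statistic (chi2 needs non-negative X).
scores, _ = chi2(X - X.min(axis=0), y)
filter_idx = np.argsort(scores)[::-1][:k]

# Embedded: importance is a by-product of fitting the model itself.
rf = RandomForestClassifier(random_state=0).fit(X, y)
embedded_idx = np.argsort(rf.feature_importances_)[::-1][:k]
```

The passage's two evaluation aspects map directly onto this sketch: stability would be assessed by rerunning each selector on resampled data and comparing the index sets, and accuracy by refitting a model on each selected subset.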
“…Time of Prediction Most of the past work had chosen a particular time for making predictions, say 24 hours from admission [14,15], or 48 hours from admission [16,17], or even at the time of admission [33], while some of the past work were not clear about it [34,35]. In the rest of the paper, we call a prediction model built to make predictions at 24 hours from admission as one-time-at-24-hour prediction model.…”
Section: 3.1 (mentioning)
confidence: 99%
“…Note that every positive laboratory coefficient contains a maximum and every negative a minimum. Comparison with features from Cronin et al [16] We can compare our features to those in Cronin et al [16], where a random forest was used to predict AKI stage 1+ (KDIGO stages 1, 2, or 3). In Cronin et al, we see strong dependence on renal indicators (e.g., GFR, UN), labs indirectly associated with renal function (Hemoglobin), heart failure, diuretics (loop, thiazides), and anti-hypertensives such as angiotensin-converting enzyme inhibitors (ACEi), which is also reflected in our findings.…”
Section: Hplr1 (mentioning)
confidence: 99%
“…Conversely, prior hospitalizations might be renal stressors, diminishing renal reserve. Most previous studies on AKI posit data models, although some more recent work [16,27] explores predictive algorithms, distinct from data models [32], as done here. Penalized regression and ensemble methods were employed to mitigate overfitting.…”
(mentioning)
confidence: 99%
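The last statement's point, that penalized regression mitigates overfitting, comes down to shrinking coefficients toward zero. A minimal sketch (synthetic data, and scikit-learn's `C` as the inverse penalty strength, not the cited work's setup) shows that a stronger L1 (lasso) penalty zeroes out more coefficients, yielding a sparser, less overfit model.

```python
# Hypothetical sketch: L1-penalized (lasso-style) logistic regression.
# Smaller C means a stronger penalty, which drives more coefficients to
# exactly zero, the mechanism behind the overfitting control mentioned above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=30, n_informative=5,
                           random_state=0)

n_nonzero = {}
for C in (0.01, 0.1, 1.0):
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
    n_nonzero[C] = int(np.count_nonzero(clf.coef_))

print(n_nonzero)  # nonzero-coefficient counts grow as the penalty weakens
```

Ensemble methods control overfitting by a different route (averaging many decorrelated trees rather than shrinking a single model's coefficients), which is partly why their calibration behavior can differ, as the abstract reports.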