In 40 of 50 US states, scheduled dialysis is withheld from undocumented immigrants with end-stage renal disease (ESRD); instead, they receive intermittent emergency-only dialysis to treat life-threatening manifestations of ESRD. However, the comparative effectiveness of scheduled vs emergency-only dialysis and the influence of treatment on health outcomes, utilization, and costs are uncertain. OBJECTIVE To compare the effectiveness of scheduled vs emergency-only dialysis with regard to health outcomes, utilization, and costs in undocumented immigrants with ESRD. DESIGN, SETTING, AND PARTICIPANTS Observational cohort study of 181 eligible adults with ESRD receiving emergency-only dialysis in Dallas, Texas, who became newly eligible and applied for private commercial health insurance in February 2015; 105 received coverage and were enrolled in scheduled dialysis; 76 were not enrolled in insurance for nonclinical reasons (eg, lack of capacity at a participating outpatient dialysis center) and remained uninsured, receiving emergency-only dialysis. We examined data on eligible persons during a 6-month period prior to enrollment (baseline period).
Background Incorporating clinical information from the full hospital course may improve prediction of 30-day readmissions. Objective To develop an all-cause readmissions risk-prediction model incorporating electronic health record (EHR) data from the full hospital stay, and to compare “full-stay” model performance to that of a “first-day” model and 2 other validated models, LACE (includes Length of stay, Acute [non-elective] admission status, Charlson Comorbidity Index, and Emergency department visits in the past year) and HOSPITAL (includes Hemoglobin at discharge, discharge from Oncology service, Sodium level at discharge, Procedure during index hospitalization, Index hospitalization Type [nonelective], number of Admissions in the past year, and Length of stay). Design Observational cohort study. Subjects All medicine discharges between November 2009 and October 2010 from 6 hospitals in North Texas, including safety net, teaching, and nonteaching sites. Measures Thirty-day nonelective readmissions were ascertained from 75 regional hospitals. Results Among 32,922 admissions (validation = 16,430), 12.7% were readmitted. In addition to many first-day factors, we identified hospital-acquired Clostridium difficile infection (adjusted odds ratio [AOR]: 2.03, 95% confidence interval [CI]: 1.18–3.48), vital sign instability on discharge (AOR: 1.25, 95% CI: 1.15–1.36), hyponatremia on discharge (AOR: 1.34, 95% CI: 1.18–1.51), and length of stay (AOR: 1.06, 95% CI: 1.04–1.07) as significant predictors. The full-stay model had better discrimination than the other models, though the improvement was modest (C statistic 0.69 vs 0.64–0.67). It was also modestly better in identifying patients at highest risk for readmission (likelihood ratio +2.4 vs 1.8–2.1) and in reclassifying individuals (net reclassification index 0.02–0.06). Conclusions Incorporating clinically granular EHR data from the full hospital stay modestly improves prediction of 30-day readmissions.
Given limited improvement in prediction despite incorporation of data on hospital complications, clinical instabilities, and trajectory, our findings suggest that many factors influencing readmissions remain unaccounted for. Further improvements in readmission models will likely require accounting for psychosocial and behavioral factors not currently captured by EHRs.
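The C statistic used above to compare the readmission models has a simple probabilistic interpretation: it is the probability that a randomly chosen readmitted patient was assigned a higher predicted risk than a randomly chosen non-readmitted patient. A minimal sketch of that pairwise definition, using toy data rather than any study data:

```python
# Concordance (C statistic): among all event/non-event pairs, the fraction
# where the event case received the higher predicted risk; ties count half.
def c_statistic(risks, outcomes):
    events = [r for r, y in zip(risks, outcomes) if y == 1]
    nonevents = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = 0.0
    for e in events:
        for n in nonevents:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / (len(events) * len(nonevents))

# Toy example: predicted risks with outcomes (1 = readmitted within 30 days)
risks = [0.9, 0.7, 0.6, 0.4, 0.3, 0.1]
outcomes = [1, 1, 0, 1, 0, 0]
print(c_statistic(risks, outcomes))  # 8 of 9 pairs concordant
```

A value of 0.5 corresponds to chance discrimination and 1.0 to perfect separation, which is why the move from 0.64–0.67 to 0.69 reported above is described as a modest gain.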
Background Although timely treatment of sepsis improves outcomes, delays in administering evidence-based therapies are common. Purpose To determine whether automated real-time electronic sepsis alerts can: 1) accurately identify sepsis, and 2) improve process measures and outcomes. Data Sources We systematically searched MEDLINE, Embase, The Cochrane Library, and CINAHL from database inception through June 27, 2014. Study Selection We included studies that empirically evaluated one or both of the prespecified objectives. Data Extraction Two independent reviewers extracted data and assessed the risk of bias. Diagnostic accuracy of sepsis identification was measured by sensitivity, specificity, positive (PPV) and negative predictive values (NPV), and likelihood ratios (LR). Effectiveness was assessed by changes in sepsis care process measures and outcomes. Data Synthesis Of 1,293 citations, 8 studies met inclusion criteria, 5 for the identification of sepsis (n=35,423) and 5 for the effectiveness of sepsis alerts (n=6,894). Though definitions of sepsis alert thresholds varied, most included systemic inflammatory response syndrome criteria ± evidence of shock. Diagnostic accuracy varied greatly, with PPV ranging from 20.5% to 53.8%; NPV 76.5% to 99.7%; LR+ 1.2 to 145.8; and LR- 0.06 to 0.86. There was modest evidence for improvement in process measures (i.e., antibiotic escalation), but only among patients in non-critical care settings; there were no corresponding improvements in mortality or length of stay. Minimal data were reported on potential harms due to false positive alerts. Conclusions Automated sepsis alerts derived from electronic health data may improve care processes but tend to have poor positive predictive value and do not improve mortality or length of stay.
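The diagnostic-accuracy metrics reported above all derive from a 2x2 confusion matrix. A minimal sketch of those standard definitions, using hypothetical counts (not drawn from any study in the review) chosen to illustrate the low-PPV pattern the review describes:

```python
# Standard diagnostic-accuracy metrics from a 2x2 table.
# tp/fp/fn/tn counts below are hypothetical, for illustration only.
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)       # P(alert | sepsis)
    spec = tn / (tn + fp)       # P(no alert | no sepsis)
    ppv = tp / (tp + fp)        # P(sepsis | alert)
    npv = tn / (tn + fn)        # P(no sepsis | no alert)
    lr_pos = sens / (1 - spec)  # positive likelihood ratio
    lr_neg = (1 - sens) / spec  # negative likelihood ratio
    return sens, spec, ppv, npv, lr_pos, lr_neg

# A sensitive alert firing often in a low-prevalence ward population:
sens, spec, ppv, npv, lr_pos, lr_neg = diagnostic_metrics(tp=80, fp=240, fn=20, tn=660)
print(f"PPV={ppv:.1%}, NPV={npv:.1%}, LR+={lr_pos:.2f}, LR-={lr_neg:.2f}")
```

Note that PPV and NPV depend on disease prevalence, while the likelihood ratios do not, which is one reason alert performance varies so widely across care settings.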
IMPORTANCE Despite providing an overlapping level of care, it is unknown why hospitalized older adults are transferred to long-term acute care hospitals (LTACs) vs less costly skilled nursing facilities (SNFs) for postacute care. OBJECTIVE To examine factors associated with variation in LTAC vs SNF transfer among hospitalized older adults. DESIGN, SETTING, AND PARTICIPANTS We conducted this retrospective observational cohort study of hospitalized older adults (≥65 years) transferred to an LTAC vs SNF during fiscal year 2012 using national 5% Medicare data. MAIN OUTCOMES AND MEASURES Predictors of LTAC transfer were assessed using a multilevel mixed-effects model adjusting for patient-, hospital-, and region-level factors. We estimated variation partition coefficients and adjusted hospital- and region-specific LTAC transfer rates using sequential models. RESULTS Among 65 525 hospitalized older adults (42 461 [64.8%] women; 39 908 [60.9%] ≥85 years) transferred to an LTAC or SNF, 3093 (4.7%) were transferred to an LTAC. We identified 29 patient-, 3 hospital-, and 5 region-level independent predictors. The strongest predictors of LTAC transfer were receiving a tracheostomy (adjusted odds ratio [aOR], 23.8; 95% CI, 15.8–35.9) and being hospitalized in close proximity to an LTAC (0–2 vs >42 miles; aOR, 8.4; 95% CI, 6.1–11.5). After adjusting for case-mix, differences between patients explained 52.1% (95% CI, 47.7%–56.5%) of the variation in LTAC use. The remainder was attributable to hospital (15.0%; 95% CI, 12.3%–17.6%) and regional differences (32.9%; 95% CI, 27.6%–38.3%). Case-mix adjusted LTAC use was very high in the South (17%–37%) compared with the Pacific Northwest, North, and Northeast (<2.2%). From the full multilevel model, the median adjusted hospital LTAC transfer rate was 2.1% (10th–90th percentile, 0.24%–10.8%).
Even within a region, adjusted hospital LTAC transfer rates varied substantially (intraclass correlation coefficient [ICC], 0.26; 95% CI, 0.23–0.30). CONCLUSIONS AND RELEVANCE Although many patient-level factors were associated with LTAC use, half of the variation in LTAC vs SNF transfer was independent of patients’ illness severity or clinical complexity and was explained by where the patient was hospitalized and in what region, with far greater use in the South. Even among hospitals in regions with similar LTAC access, there was considerable variation in LTAC use. Given the higher expense associated with LTACs vs SNFs, greater attention is needed to define the optimal role of LTACs in the postacute care of older adults.
Background Respiratory rate (RR) is an independent predictor of adverse outcomes and an integral component of many risk prediction scores for hospitalised adults. Yet, it is unclear if RR is recorded accurately. We sought to assess the potential accuracy of RR by analysing its distribution and variation as a proxy, since RR should be normally distributed if recorded accurately. Methods We conducted a descriptive observational study of electronic health record data from consecutive hospitalisations from 2009 to 2010 from six diverse hospitals. We assessed the distribution of the maximum RR on admission, using heart rate (HR) as a comparison since this is objectively measured. We assessed RR patterns among selected subgroups expected to have greater physiological variation using the coefficient of variation (CV=SD/mean). Results Among 36 966 hospitalisations, recorded RR was not normally distributed (p<0.001), but right skewed (skewness=3.99) with values clustered at 18 and 20 (kurtosis=23.9). In contrast, HR was relatively normally distributed. Patients with a cardiopulmonary diagnosis or hypoxia had only modestly greater variation (CV increase of 2%–6%). Among 1318 patients transferred from the ward to the intensive care unit, RR variation the day preceding transfer was similar to that observed on admission (CV 0.24 vs 0.26), even for those transferred with respiratory failure (CV 0.25). Conclusions The observed patterns suggest that RR is inaccurately recorded, even among those with cardiopulmonary compromise, and represents a ‘spot’ estimate with values of 18 and 20 breaths per minute representing ‘normal.’ While spot estimates may potentially be adequate to indicate clinical stability, inaccurate RR may alternatively lead to misclassification of disease severity, potentially jeopardising patient safety. Thus, we recommend greater training for hospital personnel to accurately record RR.
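The distribution checks described above (CV = SD/mean, skewness, clustering at "spot" values) can be sketched on synthetic vital-sign data. The numbers below are simulated for illustration, not study data: heart rate is drawn from a roughly normal distribution, while respiratory rate is heaped at the 18 and 20 breaths/min "normal" estimates with a right tail of genuinely elevated values.

```python
# Simulated illustration of the admission vital-sign patterns described above.
import numpy as np

rng = np.random.default_rng(0)

# HR: objectively measured, approximately normal around 90 bpm
hr = rng.normal(90, 15, 10_000)

# RR: heaped at spot estimates of 18 and 20, with a right tail of true tachypnea
rr = np.concatenate([
    np.full(4_000, 18.0),
    np.full(4_000, 20.0),
    rng.exponential(6, 2_000) + 22,
])

def cv(x):
    # Coefficient of variation: SD divided by mean
    return np.std(x) / np.mean(x)

def skewness(x):
    # Third standardized moment; ~0 for a normal distribution, >0 if right skewed
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 3)

print(f"HR: CV={cv(hr):.2f}, skew={skewness(hr):.2f}")
print(f"RR: CV={cv(rr):.2f}, skew={skewness(rr):.2f}")
```

In this simulation, HR shows near-zero skewness while RR is markedly right skewed, the same qualitative signature the study reports (RR skewness 3.99 vs a relatively normal HR).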
Background Despite considerable financial incentives for adoption, there is little evidence available about providers’ use and satisfaction with key functions of electronic health records (EHRs) that meet “meaningful use” criteria. Methods We surveyed primary care providers (PCPs) in 11 general internal medicine and family medicine practices affiliated with 3 health systems in Texas about their use and satisfaction with performing common tasks (documentation, medication prescribing, preventive services, problem list) in the Epic EHR, a common commercial system. Most practices had more than 5 years of experience with the Epic EHR. We used multivariate logistic regression to model predictors of being a structured documenter, defined as using electronic templates or prepopulated dot phrases to document at least two of the three note sections (history, physical, assessment and plan). Results 146 PCPs responded (70%). The majority used free text to document the history (51%) and assessment and plan (54%) and electronic templates to document the physical exam (57%). Half of PCPs were structured documenters (55%), with family medicine specialty (adjusted OR 3.3, 95% CI, 1.4-7.8) and years since graduation (nonlinear relationship with youngest and oldest having lowest probabilities) being significant predictors. Nearly half (43%) reported spending at least one extra hour beyond each scheduled half-day clinic completing EHR documentation. Three-quarters were satisfied with documenting completion of pneumococcal vaccinations and half were satisfied with documenting cancer screening (57% for breast, 45% for colorectal, and 46% for cervical). Fewer were satisfied with reminders for overdue pneumococcal vaccination (48%) and cancer screening (38% for breast, 37% for colorectal, and 31% for cervical).
While most believed the problem list was helpful (70%) and kept an up-to-date list for their patients (68%), half thought problem lists were unreliable and inaccurate (51%). Conclusions Dissatisfaction with and suboptimal use of key functions of the EHR may mitigate the potential for EHR use to improve preventive health and chronic disease management. Future work should optimize use of key functions and improve providers’ time efficiency.
Current acute myocardial infarction (AMI)-specific readmission risk prediction models have modest predictive ability and uncertain generalizability given methodological limitations. No existing models provide actionable information in real time to enable early identification and risk-stratification of patients with AMI before hospital discharge, a functionality needed to optimize the potential effectiveness of readmission reduction interventions.
EHR data collected from the entire hospitalization can accurately predict readmission risk among patients hospitalized for pneumonia. This approach outperforms a first-day pneumonia-specific model, the Centers for Medicare and Medicaid Services pneumonia model, and 2 commonly used pneumonia severity of illness scores. Journal of Hospital Medicine 2017;12:209-216.