OBJECTIVE Use machine-learning (ML) algorithms to classify alerts as real or artifacts in online noninvasive vital sign (VS) data streams to reduce alarm fatigue and missed true instability. METHODS Using a 24-bed trauma step-down unit’s noninvasive VS monitoring data (heart rate [HR], respiratory rate [RR], peripheral oximetry [SpO2]) recorded at 1/20 Hz, with noninvasive oscillometric blood pressure [BP] recorded less frequently, we partitioned data into training/validation (294 admissions; 22,980 monitoring hours) and test sets (2,057 admissions; 156,177 monitoring hours). Alerts were VS deviations beyond stability thresholds. A four-member expert committee annotated a subset of alerts (576 in the training/validation set, 397 in the test set), selected by active learning, as real or artifact, upon which we trained ML algorithms. The best model was evaluated on alerts in the test set to perform online alert classification as signals evolved over time. MAIN RESULTS The Random Forest model discriminated between real and artifact as the alerts evolved online in the test set, with area under the curve (AUC) performance of 0.79 (95% CI 0.67-0.93) for SpO2 at the instant the VS first crossed threshold, increasing to 0.87 (95% CI 0.71-0.95) at 3 minutes into the alerting period. BP AUC started at 0.77 (95% CI 0.64-0.95) and increased to 0.87 (95% CI 0.71-0.98), while RR AUC started at 0.85 (95% CI 0.77-0.95) and increased to 0.97 (95% CI 0.94-1.00). HR alerts were too few for model development. CONCLUSIONS ML models can discern clinically relevant SpO2, BP and RR alerts from artifacts in an online monitoring dataset (AUC > 0.87).
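As a rough illustration of the modeling step described above, the sketch below trains a Random Forest alert classifier and scores it with AUC. The features and labels are synthetic stand-ins invented for illustration, not the study's actual alert features or annotations.

```python
# Minimal sketch: Random Forest classification of alerts (real vs. artifact)
# on synthetic data; feature meanings are hypothetical, not the study's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
# Hypothetical alert-level features (e.g., signal variance, slope, duration)
X = rng.normal(size=(n, 3))
# Synthetic labels loosely correlated with the first feature
y = (X[:, 0] + rng.normal(scale=0.8, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.2f}")
```

In the study, separate models were evaluated per signal (SpO2, BP, RR) at successive times into the alerting period; the sketch shows only the generic train/score pattern.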
Approximately 10-15% of persons living with HIV (PLWH) have a comorbid diagnosis of diabetes mellitus (DM). Both of these long-term chronic conditions are associated with high rates of symptom burden. The purpose of our study was to describe symptom patterns for PLWH with DM (PLWH+DM) using a large secondary dataset. The prevalence, burden, and bothersomeness of symptoms reported by patients in routine clinic visits during 2015 were assessed using the 20-item HIV Symptom Index. Principal component analysis was used to identify symptom clusters. Three main clusters were identified: (a) neurological/psychological, (b) gastrointestinal/flu-like, and (c) physical changes. The most prevalent symptoms were fatigue, poor sleep, aches, neuropathy, and sadness. When compared to a previous symptom study with PLWH, symptoms clustered differently in our sample of patients with dual diagnoses of HIV and diabetes. Clinicians should appropriately assess symptoms for their patients' comorbid conditions.
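The symptom clusters above were identified with principal component analysis; a minimal sketch on an invented symptom-rating matrix (200 patients by 20 items, standing in for the 20-item HIV Symptom Index, which is an assumption for illustration) looks like:

```python
# Sketch: principal component analysis to group co-occurring symptoms;
# the rating matrix is synthetic, not the study's clinic data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Hypothetical: 200 patients x 20 symptom-bother ratings (0-4 scale)
ratings = rng.integers(0, 5, size=(200, 20)).astype(float)

pca = PCA(n_components=3)
scores = pca.fit_transform(ratings)     # patient scores on each component
loadings = pca.components_              # (3, 20): items loading on each cluster
print(scores.shape, loadings.shape)
```

Items that load strongly on the same component form a candidate symptom cluster, analogous to the three clusters reported above.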
PURPOSE Huge hospital information system databases can be mined for knowledge discovery and decision support, but artifact in stored non-invasive vital sign (VS) high-frequency data streams limits its use. We used machine-learning (ML) algorithms trained on expert-labeled VS data streams to automatically classify VS alerts as real or artifact, thereby “cleaning” such data for future modeling. METHODS 634 admissions to a step-down unit had recorded continuous noninvasive VS monitoring data (heart rate [HR], respiratory rate [RR], peripheral arterial oxygen saturation [SpO2] at 1/20 Hz, and noninvasive oscillometric blood pressure [BP]). Periods when data crossed stability thresholds defined VS event epochs. Data were divided into Block 1, the ML training/cross-validation set, and Block 2, the test set. Expert clinicians annotated Block 1 events as perceived real or artifact. After feature extraction, ML algorithms were trained to create and validate models automatically classifying events as real or artifact. The models were then tested on Block 2. RESULTS Block 1 yielded 812 VS events, with 214 (26%) judged by experts as artifact (RR 43%, SpO2 40%, BP 15%, HR 2%). ML algorithms applied to the Block 1 training/cross-validation set (10-fold cross-validation) gave area under the curve (AUC) scores of 0.97 RR, 0.91 BP and 0.76 SpO2. Performance when applied to Block 2 test data was AUC 0.94 RR, 0.84 BP and 0.72 SpO2. CONCLUSIONS ML-defined algorithms applied to archived multi-signal continuous VS monitoring data allowed accurate automated classification of VS alerts as real or artifact, and could support data mining for future model building.
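The 10-fold cross-validation reported above can be sketched as follows; the features and labels here are synthetic stand-ins, not the study's extracted event features.

```python
# Sketch: 10-fold cross-validated AUC for an event classifier on
# synthetic data; mirrors the validation design, not the study's features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 4))
# Labels depend weakly on two features plus noise
y = (X[:, 0] - X[:, 1] + rng.normal(scale=1.0, size=400) > 0).astype(int)

aucs = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=0),
    X, y, cv=10, scoring="roc_auc",
)
print(f"mean 10-fold AUC: {aucs.mean():.2f}")
```

The held-out Block 2 evaluation then corresponds to scoring the final fitted model on a disjoint set of admissions, as in the first sketch's train/test split.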
Feature selection techniques show promise towards reducing PHN documentation burden by identifying the most critical data elements needed to predict risk status. Further studies to refine the process of feature selection can aid in informing public health nurses' focus on client-specific and targeted interventions in the delivery of care.
Nurse practitioners may manage patients with coagulopathic bleeding, which can lead to life-threatening hemorrhage. Routine plasma-based tests such as prothrombin time and activated partial thromboplastin time are inadequate in diagnosing hemorrhagic coagulopathy. Indiscriminate administration of fresh frozen plasma, platelets or cryoprecipitate for coagulopathic states can be extremely dangerous. The qualitative analysis that thromboelastography provides can facilitate administration of the right blood product at the right time, thereby permitting goal-directed therapy for coagulopathy and supporting patient survival.
This study explored the use of unsupervised machine learning to identify subgroups of patients with heart failure who used telehealth services in the home health setting, and examined intercluster differences for patient characteristics related to medical history, symptoms, medications, psychosocial assessments, and healthcare utilization. Using a feature selection algorithm, we selected seven variables from 557 patients for clustering. We tested three clustering techniques: hierarchical, k-means, and partitioning around medoids. Hierarchical clustering was identified as the best technique using internal validation methods. Intercluster differences among patient characteristics and outcomes were assessed with either the χ² test or one-way analysis of variance. Ranging in size from 153 to 233 patients, three clusters displayed patterns that differed significantly (P < .05) in patient characteristics of age, sex, medical history of comorbid conditions, use of beta blockers, and quality of life assessment. Significant (P < .001) intercluster differences in number of medications, comorbidities, and healthcare utilization were also revealed. The study identified patterns of association between (1) mental health status, pulmonary disorders, and obesity, and (2) healthcare utilization for patients with heart failure who used telehealth in the home health setting. Study results also revealed a lack of prescription guideline-recommended heart failure medications for the subgroup with the highest proportion of older female adults.
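The internal-validation comparison of clustering techniques above can be sketched as follows. The data are synthetic (three loose groups in seven features, echoing the study's seven selected variables), silhouette score stands in as one common internal validation criterion, and partitioning around medoids is omitted because it is not part of scikit-learn's core API.

```python
# Sketch: compare hierarchical and k-means clustering with an internal
# validation index (silhouette) on synthetic data; not the study's pipeline.
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)
# Synthetic stand-in: ~560 patients x 7 selected features, 3 loose groups
centers = rng.normal(scale=4.0, size=(3, 7))
X = np.vstack([c + rng.normal(size=(186, 7)) for c in centers])

results = {
    "hierarchical": AgglomerativeClustering(n_clusters=3).fit_predict(X),
    "k-means": KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X),
}
sil = {name: silhouette_score(X, labels) for name, labels in results.items()}
for name, s in sil.items():
    print(name, round(s, 2))
```

Choosing the technique with the best internal-validation score, then testing intercluster differences with χ² or ANOVA, follows the workflow the abstract describes.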
Background Infection with the Human Immunodeficiency Virus (HIV) dramatically increases the risk of developing active tuberculosis (TB). Several studies have indicated that co-infection with TB increases the risk of HIV progression and death. Sub-Saharan Africa bears the brunt of these dual epidemics, with about 2.4 million HIV-infected people living with TB. The main objective of our study was to assess whether the pre-HAART CD4+ T-lymphocyte counts and percentages could serve as biomarkers for post-HAART treatment immune recovery in HIV-positive children with and without TB co-infection. Methods The data analyzed in this retrospective study were collected from a cohort of 305 HIV-infected children being treated with HAART. A Lehmann family of ROC curves was used to assess the diagnostic performance of pre-HAART treatment CD4+ T-lymphocyte count and percentage as biomarkers for post-HAART immune recovery. The Kaplan–Meier estimator was used to compare differences in post-HAART recovery times between patients with and without TB co-infection. Results We found that the diagnostic performance of both pre-HAART treatment CD4+ T-lymphocyte count and percentage was comparable and achieved accuracies as high as 74%. Furthermore, the predictive capability of pre-HAART CD4+ T-lymphocyte count and percentage was slightly better in TB-negative patients. Our analyses also indicate that TB-negative patients have a shorter recovery time compared to the TB-positive patients. Conclusions Pre-HAART CD4+ T-lymphocyte count and percentage are stronger predictors of immune recovery in TB-negative pediatric patients, suggesting that TB co-infection complicates the treatment of HIV in this cohort. These findings suggest that the detection and treatment of TB is essential for the effectiveness of HAART in HIV-infected pediatric patients.
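The Kaplan–Meier estimator used above to compare recovery times can be computed from first principles: at each observed event time, the survival estimate is multiplied by (1 − events/at-risk). This sketch uses invented follow-up data, not the cohort's.

```python
# Sketch: Kaplan-Meier survival estimate from scratch on invented data.
import numpy as np

def kaplan_meier(times, events):
    """times: follow-up durations; events: 1 if event observed, 0 if censored.
    Returns [(event_time, survival_estimate), ...]."""
    t = np.asarray(times, dtype=float)
    e = np.asarray(events, dtype=int)
    surv = 1.0
    curve = []
    for u in np.unique(t[e == 1]):          # distinct observed event times
        at_risk = np.sum(t >= u)            # still under observation at u
        d = np.sum((t == u) & (e == 1))     # events exactly at u
        surv *= 1 - d / at_risk
        curve.append((u, surv))
    return curve

# Hypothetical months-to-recovery; 0 in events marks a censored observation
curve = kaplan_meier([2, 3, 3, 5, 8, 8, 12], [1, 1, 0, 1, 1, 1, 0])
print(curve)
```

Comparing such curves between TB-positive and TB-negative groups (e.g., with a log-rank test) is the standard follow-up to this estimator.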
Background Patients undergoing continuous vital sign monitoring (heart rate [HR], respiratory rate [RR], pulse oximetry [SpO2]) in real time display inter-related vital sign changes during situations of physiologic stress. Patterns in this physiological cross-talk could portend impending cardiorespiratory instability (CRI). Vector autoregressive (VAR) modeling with Granger causality tests is one of the most flexible ways to elucidate underlying causal mechanisms in time series data. Purpose The purpose of this article is to illustrate development of patient-specific VAR models using vital sign time series (VSTS) data in a sample of acutely ill, monitored, step-down unit (SDU) patients, and determine their Granger causal dynamics prior to onset of an incident CRI. Approach CRI was defined as vital signs beyond stipulated normality thresholds (HR = 40–140/minute, RR = 8–36/minute, SpO2 < 85%) and persisting for 3 minutes within a 5-minute moving window (60% of the duration of the window). A 6-hour time segment prior to onset of first CRI was chosen for time series modeling in 20 patients using a six-step procedure: (a) the uniform time series for each vital sign was assessed for stationarity; (b) appropriate lag was determined using a lag-length selection criteria; (c) the VAR model was constructed; (d) residual autocorrelation was assessed with the Lagrange Multiplier test; (e) stability of the VAR system was checked; and (f) Granger causality was evaluated in the final stable model. Results The primary cause of incident CRI was low SpO2 (60% of cases), followed by out-of-range RR (30%) and HR (10%). Granger causality testing revealed that change in RR caused change in HR (21%) (i.e., RR changed before HR changed) more often than change in HR causing change in RR (15%). Similarly, changes in RR caused changes in SpO2 (15%) more often than changes in SpO2 caused changes in RR (9%). 
For HR and SpO2, changes in HR causing changes in SpO2 and changes in SpO2 causing changes in HR occurred with equal frequency (18%). Discussion Within this sample of acutely ill patients who experienced a CRI event, VAR modeling indicated that RR changes tend to occur before changes in HR and SpO2. These findings suggest that contextual assessment of RR changes as the earliest sign of CRI is warranted. Use of VAR modeling may be helpful in other nursing research applications based on time series data.
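A bivariate Granger test like the one underlying the analysis above compares an autoregressive model of one signal with and without lagged terms of the other; a larger F statistic means the second signal's history helps predict the first. The sketch below uses simulated series and plain least squares, not the study's VSTS data or its full six-step VAR procedure.

```python
# Sketch: bivariate Granger causality via nested least-squares AR models
# on simulated data; illustrative only, not the study's VAR pipeline.
import numpy as np

def granger_f(x, y, lag=2):
    """F statistic for 'x Granger-causes y': restricted model uses lags of y
    only; unrestricted model adds lags of x."""
    n = len(y)
    rows = n - lag
    Yl = np.column_stack([y[lag - k - 1 : n - k - 1] for k in range(lag)])
    Xl = np.column_stack([x[lag - k - 1 : n - k - 1] for k in range(lag)])
    target = y[lag:]
    ones = np.ones((rows, 1))

    def rss(design):
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ beta
        return resid @ resid

    rss_r = rss(np.hstack([ones, Yl]))        # restricted: y's own lags
    rss_u = rss(np.hstack([ones, Yl, Xl]))    # unrestricted: plus x's lags
    df_u = rows - (1 + 2 * lag)
    return ((rss_r - rss_u) / lag) / (rss_u / df_u)

rng = np.random.default_rng(4)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):   # y depends on lagged x; x does not depend on y
    y[t] = 0.6 * x[t - 1] + 0.3 * y[t - 1] + 0.2 * rng.normal()

f_xy, f_yx = granger_f(x, y), granger_f(y, x)
print(f_xy > f_yx)   # x -> y should dominate by construction
```

In practice, stationarity checks, lag-length selection, residual diagnostics, and stability checks (steps a, b, d, e above) precede this comparison; library implementations such as statsmodels' VAR bundle them.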