Dysphagia assessment and rehabilitation practice is complex, and significant variability in speech-language pathology approaches has been documented internationally. The aim of this study was to evaluate the swallowing-related assessment and rehabilitation practices of speech-language pathologists (SLPs) currently working in Australia. One hundred and fifty-four SLPs completed an online questionnaire administered via QuickSurveys from May to July 2015. Results were analysed descriptively. The majority of clinicians (66.23%) had accessed post-graduate training in dysphagia assessment and management. Referral and screening were typically conducted on an ad hoc basis (74.03%). Clinical swallow examination (CSE) and videofluoroscopic swallowing study were used by 93.51% and 88.31% of respondents, respectively. CSE was the assessment that predominantly informed clinical decision-making (52.63%). Clinicians typically treated clients with dysphagia for 30 minutes per session (46.10%), and recommendations for exercise repetitions were inconsistent across settings. Outcome measures were used by many respondents (67.53%) but were typically informal. Results indicate variable practice patterns for dysphagia assessment and management across Australia. This variability may reflect the heterogeneous nature of dysphagia and the varying needs of patients accessing different services.
Background In the intensive care unit (ICU), delirium is a common, acute confusional state associated with a high risk of short- and long-term morbidity and mortality. Machine learning (ML) holds promise to address research priorities and improve delirium outcomes. However, due to clinical and billing conventions, delirium is often inconsistently or incompletely labeled in electronic health record (EHR) datasets. Here, we identify clinical actions, abstracted from clinical guidelines, in EHR data that indicate risk of delirium among ICU patients. We develop a novel prediction model to label patients with delirium based on a large dataset and assess model performance. Methods EHR data on 48,451 admissions from 2001 to 2012, available through the Medical Information Mart for Intensive Care-III (MIMIC-III) database, were used to identify features for our prediction models. Five binary ML classification models (logistic regression; classification and regression trees; random forests; naïve Bayes; and support vector machines) were fit and ranked by area under the curve (AUC) scores. We compared our best model with two models previously proposed in the literature for goodness of fit, precision, and through biological validation. Results Our best-performing model, with threshold reclassification, was a multiple logistic regression using the 31 clinical actions (AUC 0.83). Our model outperformed the other proposed models in biological validation on clinically meaningful, delirium-associated outcomes. Conclusions Hurdles in identifying accurate labels in large-scale datasets limit clinical applications of ML in delirium. We developed a novel labeling model for delirium in the ICU using a large, public dataset.
Because it uses guideline-directed clinical actions, independent of risk factors, treatments, and outcomes, as model predictors, our classifier could serve as a delirium label for future clinically targeted models.
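The core modeling step the abstract describes, fitting a binary classifier on indicators of clinical actions and scoring it by AUC, can be sketched in miniature. The sketch below is illustrative only, not the authors' pipeline: the data are simulated, the 31 binary "clinical action" features are hypothetical stand-ins, and both the logistic regression and the rank-based AUC are hand-rolled for self-containment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for the abstract's setting: binary indicators of
# guideline-derived clinical actions per ICU admission (hypothetical data).
n, p = 2000, 31
X = rng.binomial(1, 0.3, size=(n, p)).astype(float)
true_w = rng.normal(0, 0.5, size=p)
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_w - 1.0))))

def fit_logistic(X, y, lr=0.1, iters=500):
    """Plain gradient-ascent logistic regression (intercept, no penalty)."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        pred = 1 / (1 + np.exp(-(Xb @ w)))
        w += lr * Xb.T @ (y - pred) / len(y)
    return w

def auc(y, scores):
    """AUC via the Mann-Whitney rank statistic (ignores ties)."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = y.sum(), len(y) - y.sum()
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

w = fit_logistic(X, y)
scores = 1 / (1 + np.exp(-(np.hstack([np.ones((n, 1)), X]) @ w)))
print(f"AUC on training data: {auc(y, scores):.3f}")
```

In practice one would rank several model families (as the paper does) on held-out data rather than report training AUC; the sketch keeps only the fit-and-score skeleton.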
Objective Unsupervised machine learning approaches hold promise for large-scale clinical data. However, the heterogeneity of clinical data raises new methodological challenges in feature selection, choosing a distance metric that captures biological meaning, and visualization. We hypothesized that clustering could discover prognostic groups from patients with chronic lymphocytic leukemia, a disease that provides biological validation through well-understood outcomes. Methods To address this challenge, we applied k-medoids clustering with 10 distance metrics in 2 experiments (“A” and “B”) with mixed clinical features collapsed to binary vectors and visualized with both multidimensional scaling and t-distributed stochastic neighbor embedding. To assess prognostic utility, we performed survival analysis using a Cox proportional hazards model, log-rank test, and Kaplan-Meier curves. Results In both experiments, survival analysis revealed a statistically significant association between clusters and survival outcomes (A: overall survival, P = .0164; B: time from diagnosis to treatment, P = .0039). Multidimensional scaling separated clusters along a gradient mirroring the order of overall survival. Longer survival was associated with mutated immunoglobulin heavy-chain variable region gene (IGHV) status, absent ZAP70 expression, female sex, and younger age. Conclusions This approach to mixed-type data handling and selection of distance metric captured well-understood, binary, prognostic markers in chronic lymphocytic leukemia (sex, IGHV mutation status, ZAP70 expression status) with high fidelity.
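The clustering step above, k-medoids over binary feature vectors under a chosen distance metric, reduces to a short numpy routine. The sketch below is a generic illustration under stated assumptions: the two simulated subgroups, the choice of a simple matching (Hamming) distance (one of many candidate metrics, not necessarily the one the study selected), and the naive PAM-style update are all illustrative, not the study's features or implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two simulated subgroups of binarized clinical features (illustrative only)
X = np.vstack([rng.binomial(1, 0.8, size=(40, 12)),
               rng.binomial(1, 0.2, size=(40, 12))])

# Pairwise simple-matching (Hamming) distance between binary vectors
D = (X[:, None, :] != X[None, :, :]).mean(axis=2)

def k_medoids(D, k, iters=20):
    """Naive PAM-style k-medoids on a precomputed distance matrix,
    with deterministic farthest-point initialization."""
    medoids = [0]
    while len(medoids) < k:
        medoids.append(int(np.argmax(D[:, medoids].min(axis=1))))
    medoids = np.array(medoids)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        new = medoids.copy()
        for c in range(k):
            grp = np.flatnonzero(labels == c)
            if len(grp):  # keep the old medoid if a cluster empties out
                new[c] = grp[np.argmin(D[np.ix_(grp, grp)].sum(axis=1))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return np.argmin(D[:, medoids], axis=1)

labels = k_medoids(D, k=2)
```

Because the distance matrix is precomputed, swapping in any of the 10 metrics the abstract mentions only changes the line that builds `D`.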
Mutations of the immunoglobulin heavy-chain variable (IGHV) region in patients with chronic lymphocytic leukemia (CLL) are associated with a favorable prognosis. Cytogenetic complexity (>3 unrelated aberrations) and translocations have been associated with an unfavorable prognosis. While the mutational status of IGHV is stable, cytogenetic aberrations frequently evolve. However, the relationships of these features as prognosticators at diagnosis are unknown. We examined the CpG-stimulated metaphase cytogenetic features detected within one year of diagnosis of CLL and correlated these features with outcome and other clinical features, including IGHV status. Of 329 untreated patients, 53 (16.1%) had a complex karyotype, and 85 (25.8%) had a translocation. Median time to first treatment (TFT) was 47 months. In univariable analyses, significant risk factors for shorter TFT (p<0.05) were Rai stage 3-4, beta2-microglobulin >3.5, log-transformed WBC, unmutated IGHV, complex karyotype, translocation, and trisomy 8, del(11q), and del(17p) by FISH. In multivariable analysis, there was significant effect modification of IGHV status on the relationship between translocation and TFT (p=0.002). In IGHV-mutated patients, those with a translocation had over 3.5 times higher risk of starting treatment than those without a translocation (p<0.001); however, in IGHV-unmutated patients, a translocation did not significantly increase the risk of starting treatment (HR 1.00, p=0.99). Rai stage 3-4, log-transformed WBC, and complex karyotype remained statistically significant; however, del(17p) did not (p=0.51). In summary, the presence of a translocation in IGHV-mutated patients appeared to negate the improved prognosis of mutated IGHV, but the presence of a translocation did not have an effect on TFT in high-risk IGHV-unmutated patients.
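Time-to-first-treatment comparisons like those above rest on the product-limit (Kaplan-Meier) estimator, which handles censored follow-up (patients who never started treatment during observation). The sketch below implements that estimator on toy times, not the study cohort; the numbers are invented for illustration.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimate.
    times: follow-up (e.g., months to first treatment);
    events: 1 = event observed (treatment started), 0 = censored."""
    times, events = np.asarray(times), np.asarray(events)
    out_t, surv, s = [], [], 1.0
    for t in np.unique(times):
        d = events[times == t].sum()   # events at time t
        n = (times >= t).sum()         # number still at risk just before t
        if d > 0:
            s *= 1 - d / n             # multiply in this step's survival
            out_t.append(t)
            surv.append(s)
    return np.array(out_t), np.array(surv)

# Toy follow-up data (illustrative only): months to treatment, event flags
t, surv = kaplan_meier([6, 13, 21, 30, 31, 47, 52, 60],
                       [1, 1, 0, 1, 1, 1, 0, 0])
```

Estimating one curve per group (e.g., translocation vs. none within IGHV-mutated patients) and comparing them with a log-rank test is the standard next step.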
The Umpire 2.0 R-package offers a streamlined, user-friendly workflow to simulate complex, heterogeneous, mixed-type data with known subgroup identities, dichotomous outcomes, and time-to-event data, while providing ample opportunities for fine-tuning and flexibility. Here, we describe how we have expanded the core Umpire 1.0 R-package, developed to simulate gene expression data, to generate clinically realistic, mixed-type data for use in evaluating unsupervised and supervised machine learning (ML) methods. As the availability of large-scale clinical data for ML has increased, clinical data have posed unique challenges, including widely variable size, individual biological heterogeneity, data collection and measurement noise, and mixed data types. Developing and validating ML methods for clinical data requires data sets with known ground truth, generated from simulation. Umpire 2.0 addresses challenges to simulating realistic clinical data by providing the user a series of modules to generate survival parameters and subgroups, apply meaningful additive noise, and discretize to single or mixed data types. Umpire 2.0 provides broad functionality across sample sizes, feature spaces, and data types, allowing the user to simulate correlated, heterogeneous, binary, continuous, categorical, or mixed-type data from the scale of a small clinical trial to data on thousands of patients drawn from electronic health records. The user may generate elaborate simulations by varying parameters in order to compare algorithms or interrogate operating characteristics of an algorithm in both supervised and unsupervised ML.
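The simulation workflow the abstract outlines (known subgroups, additive noise, discretization to mixed types, time-to-event outcomes) can be sketched generically. The sketch below is a Python analogue with arbitrary illustrative parameters; it is not the Umpire 2.0 API, and all sizes, noise levels, and hazards are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Latent subgroups with distinct mean profiles (ground truth is known)
n, p, k = 300, 10, 3
groups = rng.integers(k, size=n)
centers = rng.normal(0, 2, size=(k, p))
X = centers[groups] + rng.normal(0, 1, size=(n, p))  # biological variation
X += rng.normal(0, 0.3, size=(n, p))                 # measurement noise

# Discretize a subset of features to produce mixed-type data:
binary = (X[:, :4] > np.median(X[:, :4], axis=0)).astype(int)       # 4 binary
categorical = np.digitize(X[:, 4:7], np.quantile(X[:, 4:7], [1/3, 2/3]))
continuous = X[:, 7:]                                               # 3 continuous

# Subgroup-dependent time-to-event outcomes (exponential hazards)
hazard = np.array([0.02, 0.05, 0.10])
time_to_event = rng.exponential(1 / hazard[groups])
```

Because `groups` is retained as ground truth, a clustering or classification algorithm run on the discretized features can be scored directly against the known subgroup labels.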
We present a novel model of time-series analysis to learn from electronic health record (EHR) data when infection occurred in the intensive care unit (ICU) by translating methods from proteomics and Bayesian statistics. Using 48,536 patients hospitalized in an ICU, we describe each hospital course as an 'alphabet' of 23 physician actions ('events') in temporal order. We analyze these as k-mers of length 3-12 events and apply a Bayesian model of (cumulative) relative risk (RR). The log2-transformed RR (median = 0.248, mean = 0.226) supported the conclusion that the selected events were individually associated with increased risk of infection. Among all possible cutoffs of maximum gain (MG), MG > 0.0244 predicts administration of antibiotics with a PPV of 82.0%, an NPV of 44.4%, and an AUC of 0.706. Our approach holds value for retrospective analysis of other clinical syndromes for which time of onset is critical to analysis but poorly marked in EHRs, including delirium and decompensation.
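The k-mer idea above, treating temporally ordered physician actions as an alphabet and scoring event subsequences by relative risk of infection, can be illustrated with toy sequences. The event names and data below are hypothetical stand-ins for the paper's 23-event alphabet, and the add-alpha pseudocount smoothing is a crude placeholder for the paper's Bayesian RR model.

```python
# Hypothetical event codes standing in for the paper's physician actions
courses = [
    ("culture", "antibiotic", "fever_check", "culture", "antibiotic"),
    ("vitals", "fever_check", "culture", "antibiotic", "vitals"),
    ("vitals", "vitals", "fever_check", "vitals", "vitals"),
    ("vitals", "culture", "vitals", "fever_check", "vitals"),
]
infected = [1, 1, 0, 0]  # toy labels: did infection occur?

def kmers(seq, k):
    """All length-k subsequences of consecutive events in one course."""
    return {tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)}

def relative_risk(courses, labels, k, alpha=1.0):
    """Smoothed RR of infection given k-mer presence (add-alpha pseudocounts
    stand in for a proper Bayesian treatment)."""
    rr = {}
    all_kmers = set().union(*(kmers(c, k) for c in courses))
    for km in all_kmers:
        has = [labels[i] for i, c in enumerate(courses) if km in kmers(c, k)]
        lacks = [labels[i] for i, c in enumerate(courses) if km not in kmers(c, k)]
        p_has = (sum(has) + alpha) / (len(has) + 2 * alpha)
        p_lacks = (sum(lacks) + alpha) / (len(lacks) + 2 * alpha)
        rr[km] = p_has / p_lacks
    return rr

rr3 = relative_risk(courses, infected, k=3)
```

K-mers containing the culture-then-antibiotic pattern score RR > 1 on this toy data, mirroring the abstract's finding that selected event subsequences associate with infection risk.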