Background: Predicting hospital length of stay (LoS) for patients with COVID-19 infection is essential to ensure that adequate bed capacity can be provided without unnecessarily restricting care for patients with other conditions. Here, we demonstrate the utility of three complementary methods for predicting LoS using UK national- and hospital-level data.

Method: On a national scale, relevant patients were identified from the COVID-19 Hospitalisation in England Surveillance System (CHESS) reports. An Accelerated Failure Time (AFT) survival model and a truncation-corrected (TC) method, both with underlying Weibull distributions, were fitted to the data to estimate LoS from hospital admission date to an outcome (death or discharge) and from hospital admission date to Intensive Care Unit (ICU) admission date. In a second approach, we fitted a multi-state (MS) survival model to data directly from the Manchester University NHS Foundation Trust (MFT). We developed a planning tool that uses LoS estimates from these models to predict bed occupancy.

Results: All methods produced similar estimates of LoS for overall hospital stay, given that a patient was not admitted to ICU (8.4, 9.1 and 8.0 days for AFT, TC and MS, respectively). Estimates differed more markedly between the local and national levels when considering ICU stays: national estimates of ICU LoS from AFT and TC were 12.4 and 13.4 days, whereas the MS method applied to local data produced an estimate of 18.9 days.

Conclusions: Given the complexity and partiality of different data sources and the rapidly evolving nature of the COVID-19 pandemic, it is most appropriate to use multiple analysis methods on multiple datasets. The AFT method accounts for censored cases but does not allow simultaneous consideration of different outcomes. The TC method excludes censored cases, instead correcting for truncation in the data, but does consider these different outcomes. The MS method can model complex pathways to different outcomes whilst accounting for censoring, but cannot handle non-random case missingness. Overall, we conclude that data-driven modelling of LoS using these methods is useful in epidemic planning and management, and should be considered for widespread adoption throughout healthcare systems internationally where similar data resources exist.
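The core of the AFT and TC approaches is fitting a Weibull distribution to admission-to-outcome times in the presence of right-censoring (patients still in hospital at the analysis date). A minimal sketch of that idea, on synthetic data with invented parameters (none of the numbers below come from CHESS or MFT), is a censored Weibull maximum-likelihood fit:

```python
import numpy as np
from math import gamma
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical synthetic LoS data: true Weibull shape 1.3, scale 9 days
true_shape, true_scale = 1.3, 9.0
t = true_scale * rng.weibull(true_shape, size=500)

# Administrative right-censoring at day 14 (patients still in hospital)
obs_time = np.minimum(t, 14.0)
event = t <= 14.0  # True if the outcome (death/discharge) was observed

def neg_loglik(params):
    """Negative log-likelihood of a right-censored Weibull sample."""
    k, lam = np.exp(params)  # optimise on the log scale to keep k, lam > 0
    z = (obs_time / lam) ** k
    ll = np.where(event,
                  np.log(k / lam) + (k - 1) * np.log(obs_time / lam) - z,
                  -z)  # censored cases contribute log S(t) = -(t/lam)^k
    return -ll.sum()

res = minimize(neg_loglik, x0=[0.0, np.log(obs_time.mean())],
               method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
mean_los = lam_hat * gamma(1 + 1 / k_hat)  # Weibull mean length of stay
print(f"shape={k_hat:.2f}, scale={lam_hat:.2f}, mean LoS={mean_los:.1f} days")
```

Simply averaging the observed times would underestimate LoS, because the longest stays are the ones most likely to be censored; the likelihood above uses the survival function for censored cases instead of discarding them, which is the censoring correction the abstract refers to.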
Missing data are endemic in much educational research. However, practices such as step-wise regression, common in the educational research literature, have been shown to be dangerous when significant data are missing, and multiple imputation (MI) is generally recommended by statisticians. In this paper, we provide a review of these advances and their implications for educational research. We illustrate the issues with an educational, longitudinal survey in which missing data were substantial, but for which we were able to collect much of the missing data through subsequent data collection. We then compare the models produced by step-wise regression (which effectively ignores the missing data) and by MI with the model fitted to the enhanced sample. We discuss the value of MI, the risks involved in ignoring missing data, and the implications for research practice.
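The danger of ignoring missing data can be illustrated with a toy simulation. In the sketch below (all variables and parameters are invented for illustration, not taken from the survey described above), an outcome is missing at random with probability depending on an observed covariate; the complete-case mean is biased, while a simple regression-based multiple imputation recovers the true value:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(0, 1, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1, n)  # true mean of y is 2.0

# y is missing at random (MAR): higher x makes missingness more likely
miss = rng.random(n) < 1 / (1 + np.exp(-x))  # ~50% missing overall
y_obs = np.where(miss, np.nan, y)

observed = ~np.isnan(y_obs)
cc_mean = y_obs[observed].mean()  # complete-case mean, biased under MAR

# Simple multiple imputation: regression prediction plus residual noise,
# repeated m times, estimates pooled by averaging (Rubin's rules, point only)
b1, b0 = np.polyfit(x[observed], y_obs[observed], 1)
resid_sd = np.std(y_obs[observed] - (b0 + b1 * x[observed]))

m = 20
means = []
for _ in range(m):
    y_imp = y_obs.copy()
    y_imp[~observed] = (b0 + b1 * x[~observed]
                        + rng.normal(0, resid_sd, (~observed).sum()))
    means.append(y_imp.mean())
mi_mean = float(np.mean(means))
print(f"complete-case mean: {cc_mean:.3f}, MI mean: {mi_mean:.3f}")
```

A full MI analysis would also draw the regression parameters themselves from their posterior for each imputation and pool variances across imputations; this sketch shows only the central point that imputed datasets restore information the complete-case analysis throws away.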
We address current concerns about teaching to the test, its association with declining dispositions towards further study of mathematics, and the consequences for the choice of STEM subjects at university. In particular, through a mixed-methods study comprising a large survey sample of over 1000 students and their teachers, together with focussed qualitative case studies, we explored the impact of 'transmissionist' pedagogic practices on learning outcomes. We report on the construction and validation of a scale to measure teachers' self-reported pedagogy. We then use this measure in combination with the students' survey data and, through regression modelling, illustrate significant associations between the pedagogic measure and students' mathematics dispositions. Finally, we discuss the potential implications of these results for mathematics education and the STEM agenda.
The Genetic Counselling Outcome Scale (GCOS-24) is a 24-item patient-reported outcome measure for use in evaluations of genetic counselling and testing services. The aim of this study was to develop a short form of GCOS-24. The study comprised three phases. Phase I: cognitive interviews were used to explore the interpretability of GCOS-24 items and which items were most valued by the target population. Phase II: the Graded Response Model was used to analyse an existing set of GCOS-24 responses (n = 395) to examine item discrimination. Phase III: item selection. Three principles guided item selection: (i) items with poor discriminative properties were not selected; (ii) to avoid redundancy, items capturing a similar outcome were not selected together, with item information curves and cognitive interview findings used to establish the superior items; (iii) Rasch analysis was then used to determine the optimal scale. In Phase I, ten cognitive interviews were conducted with individuals affected by or at risk of a genetic condition, recruited from patient support groups. Analysis of interview transcripts identified twelve GCOS-24 items that were highly valued by participants. In Phase II, Graded Response Model item characteristic curves and item information curves were produced. In Phase III, findings from Phases I and II were used to select ten highly valued items that performed well. Finally, items were iteratively removed and permuted to establish optimal fit statistics under the Rasch model. The result was a six-item questionnaire with a five-point Likert scale, the Genomics Outcome Scale (GOS). The correlation between GCOS-24 and GOS scores is high (r = .838, significant at the 99% confidence level), suggesting that GOS maintains the ability of GCOS-24 to capture empowerment whilst providing a less burdensome scale for respondents. This study represents the first step in developing a preference-based measure that could be used in the evaluation of technologies and services used in genomic medicine.
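The key property being checked here, that a well-chosen item subset preserves the full-scale score, can be illustrated on synthetic Likert data. In this sketch the item structure, noise level, and subset are all invented (the real GOS items were selected via the interview, IRT, and Rasch procedure described above); it shows only how a part-whole score correlation is computed:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 395  # same sample size as the GCOS-24 response set in Phase II
theta = rng.normal(0, 1, n)  # latent trait ("empowerment")

# 24 items: latent trait plus item-specific noise, discretised to 1-5 Likert
raw = theta[:, None] + rng.normal(0, 0.8, (n, 24))
items = np.clip(np.round(raw + 3), 1, 5)

gcos24 = items.sum(axis=1)       # full-scale sum score
gos6 = items[:, :6].sum(axis=1)  # hypothetical 6-item short form
r = float(np.corrcoef(gcos24, gos6)[0, 1])
print(f"short-form vs full-scale correlation: r = {r:.3f}")
```

Because the short form is a subset of the full scale, this correlation is partly part-whole inflation; validation against external criteria, as in the preference-based measure work the authors propose, is the stronger test.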