Correlation in the broadest sense is a measure of an association between variables. In correlated data, the change in the magnitude of 1 variable is associated with a change in the magnitude of another variable, either in the same (positive correlation) or in the opposite (negative correlation) direction. Most often, the term correlation is used in the context of a linear relationship between 2 continuous variables and is expressed as the Pearson product-moment correlation coefficient. The Pearson correlation coefficient is typically used for jointly normally distributed data (data that follow a bivariate normal distribution). For nonnormally distributed continuous data, for ordinal data, or for data with relevant outliers, a Spearman rank correlation can be used as a measure of a monotonic association. Both correlation coefficients are scaled such that they range from -1 to +1, where 0 indicates that there is no linear or monotonic association, and the relationship gets stronger and ultimately approaches a straight line (Pearson correlation) or a constantly increasing or decreasing curve (Spearman correlation) as the coefficient approaches an absolute value of 1. Hypothesis tests and confidence intervals can be used to address the statistical significance of the results and to estimate the strength of the relationship in the population from which the data were sampled. The aim of this tutorial is to guide researchers and clinicians in the appropriate use and interpretation of correlation coefficients.
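To make the distinction between the two coefficients concrete, the following minimal Python sketch (using SciPy; the data are synthetic and purely illustrative) computes both on the same nonlinearly related sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic bivariate data: y depends monotonically (but not linearly) on x
x = rng.normal(size=100)
y = np.exp(x) + rng.normal(scale=0.5, size=100)

# Pearson: strength of the *linear* association
r, p_pearson = stats.pearsonr(x, y)

# Spearman: strength of the *monotonic* association (rank based,
# robust to outliers and suitable for ordinal data)
rho, p_spearman = stats.spearmanr(x, y)

print(f"Pearson r    = {r:.2f} (p = {p_pearson:.3g})")
print(f"Spearman rho = {rho:.2f} (p = {p_spearman:.3g})")
```

Because y increases monotonically but not linearly with x in this example, the Spearman coefficient is close to 1 while the Pearson coefficient is noticeably smaller, illustrating what each coefficient does and does not measure.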
Conventional cardiovascular monitoring may not detect tissue hypoxia, and conventional cardiovascular support aiming at global hemodynamics may not restore tissue oxygenation. Near-infrared spectroscopy (NIRS) offers non-invasive online monitoring of tissue oxygenation in a wide range of clinical scenarios. NIRS monitoring is commonly used to measure cerebral oxygenation (rSO2), e.g., during cardiac surgery. In this review, we will show that tissue hypoxia occurs frequently in the perioperative setting, particularly in cardiac surgery. Therefore, measuring and maintaining adequate tissue oxygenation may prevent (postoperative) complications and may thus be cost-effective. NIRS monitoring may also be used to detect tissue hypoxia in (prehospital) emergency settings, where it has prognostic significance and enables monitoring of therapeutic interventions, particularly in patients with trauma. However, optimal therapeutic agents and strategies for augmenting tissue oxygenation have yet to be determined.
Survival analysis, or more generally, time-to-event analysis, refers to a set of methods for analyzing the length of time until the occurrence of a well-defined end point of interest. A unique feature of survival data is that typically not all patients experience the event (eg, death) by the end of the observation period, so the actual survival times for some patients are unknown. This phenomenon, referred to as censoring, must be accounted for in the analysis to allow for valid inferences. Moreover, survival times are usually skewed, limiting the usefulness of analysis methods that assume a normal data distribution. As part of the ongoing series in Anesthesia & Analgesia, this tutorial reviews statistical methods for the appropriate analysis of time-to-event data, including nonparametric and semiparametric methods—specifically the Kaplan-Meier estimator, log-rank test, and Cox proportional hazards model. These methods are by far the most commonly used techniques for such data in medical literature. Illustrative examples from studies published in Anesthesia & Analgesia demonstrate how these techniques are used in practice. Full parametric models and models to deal with special circumstances, such as recurrent events models, competing risks models, and frailty models, are briefly discussed.
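As a brief illustration of the three techniques named above (not the analyses from the cited studies), the sketch below uses the lifelines Python library on synthetic, right-censored data; the group labels and distributional choices are assumptions for demonstration only:

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 200

# Synthetic time-to-event data: exponential survival times, some censored
group = rng.integers(0, 2, n)                        # 0 = control, 1 = treatment
event_time = rng.exponential(scale=np.where(group == 1, 14, 10))
censor_time = rng.exponential(scale=25, size=n)      # administrative censoring
observed = (event_time <= censor_time).astype(int)   # 1 = event, 0 = censored
duration = np.minimum(event_time, censor_time)
df = pd.DataFrame({"duration": duration, "event": observed, "group": group})

# Kaplan-Meier estimate per group (handles censoring nonparametrically)
kmf = KaplanMeierFitter()
for g, label in [(0, "control"), (1, "treatment")]:
    sub = df[df["group"] == g]
    kmf.fit(sub["duration"], event_observed=sub["event"], label=label)
    print(label, "median survival:", kmf.median_survival_time_)

# Log-rank test comparing the two survival curves
a, b = df[df["group"] == 0], df[df["group"] == 1]
res = logrank_test(a["duration"], b["duration"],
                   event_observed_A=a["event"], event_observed_B=b["event"])
print(f"log-rank p = {res.p_value:.3f}")

# Semiparametric Cox proportional hazards model
cph = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
cph.print_summary()
```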
Anesthesia, critical care, perioperative, and pain research often involves study designs in which the same outcome variable is repeatedly measured or observed over time on the same patients. Such repeatedly measured data are referred to as longitudinal data, and longitudinal study designs are commonly used to investigate changes in an outcome over time and to compare these changes among treatment groups. From a statistical perspective, longitudinal studies usually increase the precision of estimated treatment effects, thus increasing the power to detect such effects. Commonly used statistical techniques mostly assume independence of the observations or measurements. However, values repeatedly measured in the same individual will usually be more similar to each other than values of different individuals, and ignoring the correlation between repeated measurements may lead to biased estimates as well as invalid P values and confidence intervals. Therefore, appropriate analysis of repeated-measures data requires specific statistical techniques. This tutorial reviews 3 classes of commonly used approaches for the analysis of longitudinal data. The first class uses summary statistics to condense the repeatedly measured information to a single number per subject, thus eliminating within-subject repeated measurements and allowing for a straightforward comparison of groups using standard statistical hypothesis tests. The second class is historically popular and comprises the repeated-measures analysis of variance type of analyses. However, strong assumptions that are seldom met in practice and low flexibility limit the usefulness of this approach. The third class comprises modern and flexible regression-based techniques that can be generalized to accommodate a wide range of outcome data including continuous, categorical, and count data. Such methods can be further divided into so-called “population-average statistical models” that focus on the specification of the mean response of the outcome estimated by generalized estimating equations, and “subject-specific models” that allow a full specification of the distribution of the outcome by using random effects to capture within-subject correlations. The choice of approach partly depends on the aim of the research and the desired interpretation of the estimated effects (population-average versus subject-specific interpretation). This tutorial discusses aspects of the theoretical background for each technique and, with specific examples of studies published in Anesthesia & Analgesia, demonstrates how these techniques are used in practice.
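The contrast between population-average and subject-specific models can be illustrated with the statsmodels Python library; the sketch below fits a GEE with an exchangeable working correlation and a random-intercept linear mixed model to the same synthetic repeated-measures data (variable names and effect sizes are invented for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_subj, n_times = 60, 4

# Synthetic longitudinal data: repeated pain scores per subject over time
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_times),
    "time": np.tile(np.arange(n_times), n_subj),
    "group": np.repeat(rng.integers(0, 2, n_subj), n_times),
})
subj_effect = np.repeat(rng.normal(scale=1.0, size=n_subj), n_times)
df["pain"] = (5 - 0.5 * df["time"] - 0.4 * df["group"] * df["time"]
              + subj_effect + rng.normal(scale=0.8, size=len(df)))

# Population-average model: GEE with exchangeable working correlation
gee = smf.gee("pain ~ time * group", groups="subject", data=df,
              cov_struct=sm.cov_struct.Exchangeable(),
              family=sm.families.Gaussian()).fit()

# Subject-specific model: linear mixed model with a random intercept
lmm = smf.mixedlm("pain ~ time * group", data=df,
                  groups=df["subject"]).fit()

print(gee.summary())
print(lmm.summary())
```

For a linear model with Gaussian outcomes the two fixed-effect estimates largely coincide; for nonlinear models (e.g., logistic), the population-average and subject-specific coefficients differ in magnitude and interpretation, which is why the choice between the two model classes matters.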
Invasive cardiac output (CO) monitoring, traditionally performed with transpulmonary thermodilution techniques, is usually reserved for high-risk patients because of the inherent risks of these methods. In contrast, transesophageal Doppler (TED) technology offers a safe, quick, and less invasive method for routine measurements of CO. After esophageal insertion and focusing of the probe, the Doppler beam interrogates the descending aortic blood flow. On the basis of the measured frequency shift between the emitted and received ultrasound frequency, blood flow velocity is determined. From this velocity, combined with the simultaneously measured systolic ejection time, CO and other advanced hemodynamic variables can be calculated, including estimations of preload, afterload, and contractility. Numerous studies have validated TED-derived CO against reference methods. Although the agreement of CO values between TED and the reference methods is limited (95% limits of agreement: median 4.2 L/min, interquartile range 3.3-5.0 L/min), TED has been shown to accurately follow changes of CO over time, making it a useful device for trend monitoring. TED can be used to guide perioperative intravascular volume substitution and therapy with vasoactive or inotropic drugs. Various studies have demonstrated a reduced postoperative morbidity and shorter length of hospital stay in patients managed with TED compared with conventional clinical management, suggesting that it may be a valuable supplement to standard perioperative monitoring. We review not only the technical basis of this method and its clinical application but also its limitations, risks, and contraindications.
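The velocity and CO calculations can be sketched from the standard Doppler equation, v = (Δf · c) / (2 · f0 · cos θ). The Python sketch below is a deliberate simplification with nominal, illustrative values: the emitted frequency, beam angle, aortic cross-sectional area, and the assumed ~70% descending-aorta fraction are all assumptions, and actual TED devices rely on internal nomograms and calibration rather than these hand-picked constants:

```python
import math

C_TISSUE = 1540.0  # nominal speed of sound in tissue, m/s

def aortic_velocity(delta_f_hz, f0_hz=4e6, theta_deg=45.0):
    """Blood flow velocity (m/s) from the measured Doppler frequency shift."""
    return (delta_f_hz * C_TISSUE) / (2.0 * f0_hz * math.cos(math.radians(theta_deg)))

def cardiac_output(mean_velocity, ejection_time_s, aortic_area_cm2,
                   heart_rate, descending_fraction=0.7):
    """Estimated CO (L/min) from velocity and systolic ejection time.

    Stroke distance (VTI) = mean systolic velocity x ejection time;
    descending-aorta stroke volume = VTI x cross-sectional area; total
    stroke volume assumes ~70% of CO passes the descending aorta
    (an illustrative assumption).
    """
    vti_cm = mean_velocity * 100.0 * ejection_time_s    # cm per beat
    sv_descending_ml = vti_cm * aortic_area_cm2         # mL per beat
    sv_total_ml = sv_descending_ml / descending_fraction
    return sv_total_ml * heart_rate / 1000.0            # L/min

v = aortic_velocity(delta_f_hz=2000)   # ~0.54 m/s for these settings
co = cardiac_output(v, ejection_time_s=0.30, aortic_area_cm2=3.0, heart_rate=70)
print(f"velocity = {v:.2f} m/s, estimated CO = {co:.1f} L/min")  # ~4.9 L/min
```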
Background: Patients with severe traumatic brain injury (TBI) are at high risk for airway obstruction and hypoxia at the accident scene, and routine prehospital endotracheal intubation has been widely advocated. However, the effects on outcome are unclear. We therefore aim to determine the effects of prehospital intubation on mortality and hypothesize that such effects may depend on the emergency medical service providers’ skill and experience in performing this intervention. Methods and Findings: PubMed, Embase, and Web of Science were searched without restrictions up to July 2015. Studies comparing effects of prehospital intubation versus non-invasive airway management on mortality in non-paediatric patients with severe TBI were selected for the systematic review. Results were pooled across a subset of studies that met predefined quality criteria. Random-effects meta-analysis, stratified by provider experience, was used to obtain pooled estimates of the effect of prehospital intubation on mortality. Meta-regression was used to formally assess differences between experience groups. Mortality was the main outcome measure, and odds ratios refer to the odds of mortality in patients undergoing prehospital intubation versus the odds of mortality in patients who were not intubated in the field. The study was registered at the International Prospective Register of Systematic Reviews (PROSPERO) under number CRD42014015506. The search yielded 733 studies, of which 6 studies including data from 4772 patients met the inclusion and quality criteria for the meta-analysis. Prehospital intubation by providers with limited experience was associated with an approximately twofold increase in the odds of mortality (OR 2.33, 95% CI 1.61 to 3.38, p<0.001). In contrast, there was no evidence for higher mortality in patients who were intubated by providers with an extended level of training (OR 0.75, 95% CI 0.52 to 1.08, p = 0.126). Meta-regression confirmed that experience is a significant predictor of mortality (p = 0.009). Conclusions: The effects of prehospital endotracheal intubation depend on the experience of prehospital healthcare providers. Intubation by paramedics who are not well skilled in this intervention markedly increases mortality, suggesting that routine prehospital intubation of TBI patients should be abandoned in emergency medical services in which providers do not have ample training, skill, and experience in performing this intervention.
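For readers unfamiliar with random-effects pooling, the sketch below implements the standard DerSimonian-Laird estimator in Python; the per-study effect sizes are hypothetical placeholders, not the six studies actually pooled in this meta-analysis:

```python
import math

# Hypothetical per-study log odds ratios and standard errors (placeholders,
# NOT the data from the six included studies)
log_or = [0.9, 0.7, 1.1, 0.5]
se     = [0.30, 0.25, 0.40, 0.35]

v = [s ** 2 for s in se]
w_fixed = [1.0 / vi for vi in v]                 # inverse-variance weights
y_fixed = sum(w * y for w, y in zip(w_fixed, log_or)) / sum(w_fixed)

# DerSimonian-Laird estimate of the between-study variance tau^2
q = sum(w * (y - y_fixed) ** 2 for w, y in zip(w_fixed, log_or))
c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - (len(log_or) - 1)) / c)

# Random-effects weights, pooled estimate, and 95% CI on the OR scale
w_re = [1.0 / (vi + tau2) for vi in v]
y_re = sum(w * y for w, y in zip(w_re, log_or)) / sum(w_re)
se_re = math.sqrt(1.0 / sum(w_re))
lo, hi = y_re - 1.96 * se_re, y_re + 1.96 * se_re

print(f"pooled OR = {math.exp(y_re):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f}), tau^2 = {tau2:.3f}")
```

Stratifying by experience, as done here, amounts to running this pooling separately within each experience group; meta-regression then tests whether the group-level effects differ more than chance would allow.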
Question: Is coronavirus disease 2019 (COVID-19) incubation time-based staffing of benefit with regard to reducing the number of infected health care workers (HCWs)? • Findings: Comprehensive statistical modeling reveals a significant reduction of intensive care unit (ICU) staff shortage due to infection when both incubation and quarantine times of COVID-19 are considered. • Meaning: Scheduling ICU staff according to the epidemiological characteristics of a pandemic may reduce the number of infected staff and may increase the chances of operational functionality of health care facilities and systems. BACKGROUND: Health care worker (HCW) safety is of pivotal importance during a pandemic such as coronavirus disease 2019 (COVID-19), and employee health and well-being ensure the functionality of health care institutions. This is particularly true for an intensive care unit (ICU), where highly specialized staff cannot be readily replaced. In light of the lack of evidence for optimal staffing models in a pandemic, we hypothesized that staff shortage can be reduced when staff scheduling takes the epidemiology of a disease into account. METHODS: Various staffing models were constructed, and comprehensive statistical modeling was performed. A typical routine staffing model was defined that assumed full-time employment (40 h/wk) in a 40-bed ICU with a 2:1 patient-to-staff ratio. A pandemic model assumed that staff worked 12-hour shifts for 7 days every other week. Potential in-hospital staff infections were simulated for a total period of 120 days, with a probability of 10%, 25%, or 40% of being infected per week when at work. Simulations included the probability of infection at work in a given week, the probability of fatality after infection, and the quarantine time if infected. RESULTS: Pandemic-adjusted staffing significantly reduced workforce shortage, and the effect progressively increased as the probability of infection increased. Maximum effects were observed at week 4 for each infection probability, with a 17%, 32%, and 38% reduction in staff shortage for an infection probability of 0.10, 0.25, and 0.40, respectively. CONCLUSIONS: Staffing along epidemiologic considerations may reduce HCW shortage by leveling the nadir of the affected workforce. Although this requires considerable effort and commitment of staff, it may be essential in an effort to best maintain staff health and the operational functionality of health care facilities and systems.
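The staffing comparison can be illustrated with a simplified Monte Carlo sketch in Python. This is not the authors' model: the two-team split, quarantine length, staff count, and other parameters are assumptions for illustration only, and fatality is ignored:

```python
import numpy as np

rng = np.random.default_rng(0)

def unavailable_fraction(staff=80, weeks=17, p_infect=0.25,
                         quarantine_weeks=2, alternating=False, n_sims=2000):
    """Mean weekly fraction of staff unavailable because of infection.

    With alternating=True, staff are split into two teams working
    alternate weeks (a rough stand-in for the 7-days-on/7-days-off
    pandemic model); infection can occur only in weeks actually worked,
    and an infected worker is out for quarantine_weeks weeks starting
    with the infection week.
    """
    out = np.zeros(weeks)
    for _ in range(n_sims):
        until = np.full(staff, -1)       # last week each worker is quarantined
        team = np.arange(staff) % 2      # used only when alternating
        for wk in range(weeks):
            working = until < wk
            if alternating:
                working &= team == wk % 2
            newly = working & (rng.random(staff) < p_infect)
            until[newly] = wk + quarantine_weeks - 1
            out[wk] += (until >= wk).mean()
    return out / n_sims

routine = unavailable_fraction(alternating=False)
pandemic = unavailable_fraction(alternating=True)
print(f"peak staff unavailable, routine model : {routine.max():.0%}")
print(f"peak staff unavailable, pandemic model: {pandemic.max():.0%}")
```

Because the alternating schedule halves weekly exposure and staggers quarantines across teams, the simulated nadir of the available workforce is shallower, which is the qualitative effect the study quantifies.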