The reported associations suggest that organ injury may occur when mean arterial pressure decreases below 80 mm Hg for ≥10 min, and that this risk increases as blood pressure becomes progressively lower. Given the retrospective observational design of the reviewed studies, reflected in the large variability in patient characteristics, hypotension definitions and outcomes, no solid conclusions can be drawn on which blood pressures, under which circumstances, are truly too low. We provide recommendations for the design of future studies. Clinical registration number: PROSPERO CRD42013005171.
Implementation of the WHO Surgical Checklist reduced in-hospital 30-day mortality. Although the impact on outcome was smaller than previously reported, the effect depended crucially upon checklist compliance.
There is no widely accepted definition of intraoperative hypotension (IOH). Depending on the definition applied, widely different incidences are obtained. This may have implications for previously described associations between IOH and adverse outcomes.
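How strongly the measured IOH incidence depends on the chosen definition can be illustrated with a small simulation. The traces, thresholds and durations below are hypothetical and chosen only for illustration; they are not taken from the reviewed studies:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical intraoperative MAP traces: 200 patients, one reading per
# minute for 120 minutes (a per-patient baseline plus minute-to-minute noise).
baseline = rng.normal(82, 10, size=(200, 1))
traces = baseline + rng.normal(0, 5, size=(200, 120))

def hypotensive(trace, threshold, min_minutes):
    """True if MAP stays below `threshold` for at least `min_minutes`
    consecutive one-minute readings."""
    run = 0
    for reading in trace:
        run = run + 1 if reading < threshold else 0
        if run >= min_minutes:
            return True
    return False

# Three illustrative IOH definitions (threshold in mm Hg, duration in min).
definitions = {
    "MAP < 65 mm Hg for >= 1 min": (65, 1),
    "MAP < 65 mm Hg for >= 10 min": (65, 10),
    "MAP < 80 mm Hg for >= 10 min": (80, 10),
}
incidence = {
    name: np.mean([hypotensive(t, thr, dur) for t in traces])
    for name, (thr, dur) in definitions.items()
}
for name, inc in incidence.items():
    print(f"{name}: incidence {inc:.0%}")
```

In the same simulated cohort, the stricter definitions necessarily classify a subset of the patients flagged by the looser ones, so the "incidence of IOH" shifts with the definition even though the underlying blood pressure data are identical.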
Independent of the type and extent of surgery, preoperative chronic pain and younger age were associated with higher postoperative pain scores. Females consistently reported slightly higher pain scores regardless of the type of surgery. The clinical significance of this small sex difference remains to be determined in future studies.
An important aim of clinical prediction models is to positively influence clinical decision making and subsequent patient outcomes. This impact can be quantified in prospective comparative, ideally cluster-randomized, studies, known as 'impact studies'. However, such impact studies often require considerable time and resources, especially when they are (cluster-)randomized. Before embarking on such a large-scale randomized impact study, it is important to ensure a reasonable chance that use of the prediction model by the targeted healthcare professionals and patients will indeed have a positive effect on both decision making and subsequent outcomes. We recently performed two differently designed, prospective impact studies of a clinical prediction model to be used in surgical patients. Both studies taught us valuable lessons on several aspects of prediction model impact studies and on the considerations that may guide researchers in deciding whether to conduct a prospective comparative impact study. We provide considerations on how to prepare a prediction model for implementation in practice, how to present the model predictions, and how to choose the proper design for a prediction model impact study.
Background: When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which predictor effects are of interest at the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that random effect parameter estimates differ from standard logistic regression parameter estimates. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods: Using an empirical study of 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models with either standard or random intercept logistic regression. External validity of these models was assessed in new patients treated by other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted to the clustered data structure were estimated. Results: The model developed with random effect analysis showed better discrimination than the standard approach, provided the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results.
For datasets with a high ICC (≥15%), model calibration in external subjects was adequate only if the performance measure used assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard model, whereas calibration measures accounting for the clustered data structure showed good calibration for the random intercept model. Conclusion: The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters.
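The core discrimination comparison can be sketched with simulated data in plain numpy. This is a minimal illustration, not the authors' analysis: the cluster sizes, effect sizes and seed are hypothetical, and the per-cluster offset below is a crude stand-in for the fitted random intercepts that a real analysis would obtain from a mixed-effects package:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated clustered data: 19 clusters (e.g. anesthesiologists), each with
# a random intercept on the logit scale (hypothetical parameters).
n_clusters, n_per = 19, 100
u = rng.normal(0.0, 1.0, n_clusters)            # true cluster intercepts
cluster = np.repeat(np.arange(n_clusters), n_per)
x = rng.normal(size=n_clusters * n_per)         # one patient-level predictor
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 1.0 * x + u[cluster]))))

def fit_logistic(X, y, iters=25):
    """Standard (cluster-blind) logistic regression via Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        beta += np.linalg.solve((X * (p * (1 - p))[:, None]).T @ X,
                                X.T @ (y - p))
    return beta

def c_index(score, y):
    """Concordance: probability that a case outscores a control."""
    d = score[y == 1][:, None] - score[y == 0][None, :]
    return (d > 0).mean() + 0.5 * (d == 0).mean()

X = np.column_stack([np.ones_like(x), x])
marginal = X @ fit_logistic(X, y)               # standard model score

# Crude stand-in for estimated random intercepts: each cluster's observed
# log-odds minus that cluster's mean marginal logit.
rate = np.array([y[cluster == j].mean() for j in range(n_clusters)])
rate = rate.clip(0.01, 0.99)
offset = np.log(rate / (1 - rate)) - np.array(
    [marginal[cluster == j].mean() for j in range(n_clusters)])
conditional = marginal + offset[cluster]        # score using cluster effects

print(f"c-index, standard model:      {c_index(marginal, y):.2f}")
print(f"c-index, with cluster effect: {c_index(conditional, y):.2f}")
```

The sketch mirrors the paper's finding only qualitatively: scores that incorporate the cluster effect discriminate better within the clusters they were estimated on, while for genuinely new clusters (external validation) no such offset is available and both models fall back to similar discrimination.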
What is already known about the topic? Pain assessment is the foundation of pain management for patients experiencing postoperative pain. Frequent and thorough assessment of patients' pain by registered nurses provides the information needed to achieve optimal pain relief. Clinical guidelines for postoperative pain management are based on the patient's pain score, and these guidelines use different cut-off points to trigger pain treatment.
What this paper adds: Patients and professionals interpret the numeric rating scores for postoperative pain differently.
Combining the model's probabilistic output with their clinical experience may be difficult for physicians, especially when their decision-making process is largely intuitive. Adding recommendations to the predicted risks (a directive approach) was considered an important step toward facilitating uptake of a prediction tool.