Survival analysis is the branch of statistics that models time to event and censoring status jointly as the dependent response. Current comparisons of survival model performance mostly center on clinical data with classic statistical survival models, with prediction accuracy often serving as the sole metric of model performance. Moreover, survival analysis approaches for censored omics data have not been thoroughly investigated; the common workaround is to binarize the survival time and perform a classification analysis. Here, we develop a benchmarking design, SurvBenchmark, that evaluates a diverse collection of survival models on both clinical and omics data sets. SurvBenchmark not only covers classical approaches such as the Cox model but also evaluates state-of-the-art machine learning survival models. All approaches were assessed using multiple performance metrics, including model predictability, stability, flexibility, and computational cost. Our systematic comparison design with 320 comparisons (20 methods across 16 data sets) shows that the performance of survival models varies in practice across real-world data sets and across evaluation metrics. In particular, we highlight that using multiple performance metrics is critical for a balanced assessment of the various models. The results of our study provide practical guidelines for translational scientists and clinicians, as well as defining possible areas of investigation in both survival techniques and benchmarking strategies.
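The "binarize and classify" workaround mentioned above can be sketched as follows. This is a minimal illustration under assumed conventions, not code from SurvBenchmark: the 5-unit cutoff and the toy data are made up, and subjects censored before the cutoff receive no label, which is precisely the information loss that motivates proper survival models.

```python
def binarize_survival(times, events, cutoff):
    """Dichotomize survival times at a fixed landmark so a standard
    binary classifier can be applied. events: 1 = event observed,
    0 = censored."""
    labels = []
    for t, e in zip(times, events):
        if t >= cutoff:
            labels.append(0)      # known to survive past the landmark
        elif e == 1:
            labels.append(1)      # event observed before the cutoff
        else:
            labels.append(None)   # censored early: true label unknown
    return labels

# Toy example: the third subject is censored at t=3 and must be dropped.
print(binarize_survival([2, 6, 3, 7], [1, 0, 0, 1], cutoff=5))
# -> [1, 0, None, 0]
```

The `None` labels make the drawback explicit: every early-censored subject is discarded before classification, whereas survival models use that partial follow-up information.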
Extracellular protein disulfide isomerases (PDIs), including PDI, endoplasmic reticulum protein 57 (ERp57), ERp72, ERp46, and ERp5, are required for in vivo thrombus formation in mice. Platelets secrete PDIs upon activation, which regulate platelet aggregation; however, platelets secrete only ~10% of their PDI content extracellularly, and the intracellular role of PDIs in platelet function is unknown. In the current study, we aimed to characterize the role of ERp5 (gene Pdia6) using platelet conditional knockout mice, platelet factor 4 (Pf4) Cre+/ERp5fl/fl. Pf4Cre+/ERp5fl/fl mice developed mild macrothrombocytopenia. Platelets deficient in ERp5 showed marked dysregulation of their ER, indicated by a two-fold upregulation of ER proteins, including PDI, ERp57, ERp72, ERp46, 78 kDa glucose-regulated protein (GRP78), and calreticulin. ERp5-deficient platelets showed an enhanced ER stress response to ex vivo and in vivo ER stress inducers, with enhanced phosphorylation of eukaryotic translation initiation factor 2α (eIF2α) and inositol-requiring enzyme 1 (IRE1). ERp5 deficiency was associated with increased secretion of PDIs, an enhanced response to thromboxane A2 (TXA2) receptor activation, and increased thrombus formation in vivo. Our results support a role for ERp5 as a negative regulator of ER stress responses in platelets and highlight the importance of a disulfide isomerase in platelet ER homeostasis. They also indicate a previously unanticipated role of platelet ER stress in platelet secretion and thrombosis, which may have important implications for therapeutic applications of ER stress inhibitors in thrombosis.
Survival analysis is a branch of statistics that deals with the tracking of time and of survival status simultaneously as the dependent response. Current comparisons of the performance of survival models mostly focus on classical clinical data with traditional statistical survival models, with prediction accuracy often being the only measure of model performance. Moreover, survival analysis approaches for censored omics data have not been fully studied; the typical solution is to truncate survival time, define a new status variable, and then perform a binary classification analysis. Here, we develop a benchmarking framework that compares survival models on both clinical and omics data sets and that not only covers classical statistical survival models but also incorporates state-of-the-art machine learning survival models, with multiple performance evaluation measurements including model predictability, stability, flexibility, and computational cost. Our comprehensive comparison framework shows that optimality depends on the data set and the analysis method: there is no one-size-fits-all solution for any of the criteria or any of the methods, and some methods with a high C-index suffer from computational exhaustion and instability. Our framework gives researchers insight into how different survival model implementations vary over real-world data sets. We highlight that care is needed when selecting methods and specifically recommend not treating the C-index as the only performance evaluation metric, since alternative metrics measure other aspects of performance.
Code availability: https://github.com/SydneyBioX/SurvBenchmark
Contact: jean.yang@sydney.edu.au
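For readers unfamiliar with the C-index criticized above, here is a minimal from-scratch sketch of Harrell's concordance index. The toy patients and risk scores are illustrative values invented for this example, not data from the study; ties in risk are credited 0.5 by convention.

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: the fraction of comparable pairs in which the
    subject with the higher predicted risk fails first.
    events: 1 = event observed, 0 = censored."""
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        if events[i] != 1:
            continue  # a censored subject cannot anchor a comparable pair
        for j in range(n):
            if times[i] < times[j]:  # i failed before j was last observed
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5  # tied risks count half
    return concordant / comparable

# Toy example: three patients, the last one censored at t=12.
times, events = [5, 10, 12], [1, 1, 0]
print(concordance_index(times, events, [0.9, 0.4, 0.2]))  # -> 1.0
print(concordance_index(times, events, [0.2, 0.4, 0.9]))  # -> 0.0
```

Note that the C-index only measures ranking ability; it says nothing about calibration, stability, or computational cost, which is why the benchmark above reports multiple metrics.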