Several web-based heart failure (HF) risk scores are currently used in clinical practice, yet head-to-head comparisons of their accuracy are lacking. This study aimed to assess the correlation and mortality-prediction performance of the Meta-Analysis Global Group in Chronic Heart Failure (MAGGIC-HF) risk score, which includes clinical variables plus medications; the Seattle Heart Failure Model (SHFM), which includes clinical variables, treatments, and analytes; and the PARADIGM Risk of Events and Death in the Contemporary Treatment of Heart Failure (PREDICT-HF) and Barcelona Bio-Heart Failure (BCN-Bio-HF) risk calculators, which also include biomarkers such as N-terminal pro-B-type natriuretic peptide (NT-proBNP).
Learning algorithms for energy-based Boltzmann architectures that rely on gradient descent are in general computationally prohibitive, typically because of the exponential number of terms involved in computing the partition function. One therefore has to resort to approximation schemes for evaluating the gradient. This is the case for Restricted Boltzmann Machines (RBMs) and their learning algorithm, Contrastive Divergence (CD). CD is well known to have a number of shortcomings, and its approximation to the gradient has several drawbacks. Overcoming these defects has motivated much research, and new algorithms have been devised, such as persistent CD. In this manuscript we propose a new algorithm, which we call Weighted CD (WCD), built from small modifications of the negative phase in standard CD. However small these modifications may be, the experimental work reported in this paper suggests that WCD provides a significant improvement over standard CD and persistent CD at a small additional computational cost.
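To make the "modified negative phase" idea concrete, here is a minimal NumPy sketch of a CD-1 gradient estimate for an RBM in which the negative-phase statistics can be a weighted batch average instead of a uniform one. The abstract does not specify WCD's actual weighting scheme, so the choice below (weights proportional to the model's unnormalized probability of each negative sample) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def free_energy(v, W, b, c):
    # RBM free energy: F(v) = -b.v - sum_j log(1 + exp(c_j + (vW)_j))
    return -(v @ b) - np.logaddexp(0.0, v @ W + c).sum(axis=1)

def cd_weight_update(v0, W, b, c, weighted=False):
    """One CD-1 estimate of the weight gradient on a mini-batch v0.
    With weighted=True the negative-phase statistics are a weighted
    average over the batch, with weights proportional to the model's
    unnormalized probability of each negative sample (an illustrative
    assumption, not necessarily the paper's exact WCD scheme)."""
    ph0 = sigmoid(v0 @ W + c)                       # positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)                     # one Gibbs step down
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)                       # negative phase
    if weighted:
        s = -free_energy(v1, W, b, c)               # log unnormalized p(v1)
        w = np.exp(s - s.max())                     # stabilized softmax weights
        w /= w.sum()
    else:
        w = np.full(len(v1), 1.0 / len(v1))         # standard CD: uniform mean
    return v0.T @ ph0 / len(v0) - (v1 * w[:, None]).T @ ph1
```

Note that the `weighted=False` branch reduces exactly to standard CD-1, which matches the abstract's claim that WCD is a small modification with small extra cost: only the per-sample weights are new.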
The complexity of resource usage and power consumption in cloud-based applications makes it difficult to understand application behavior through expert examination alone. The difficulty increases when applications are seen as "black boxes", where only external monitoring data can be retrieved. Furthermore, given the wide variety of scenarios and applications, automation is required. Here we examine and model application behavior by finding behavior phases. We use Conditional Restricted Boltzmann Machines (CRBMs) to model time series containing resource-trace measurements such as CPU, memory, and I/O. CRBMs can be used to map a given historic window of trace behavior into a single vector. This low-dimensional, time-aware vector can be passed to clustering methods, from simple ones like k-means to more complex ones such as those based on Hidden Markov Models (HMMs). We use these methods to find phases of similar behavior in the workloads. Our experimental evaluation shows that the proposed method is able to identify different phases of resource consumption across different workloads. We show that the distinct phases contain specific resource patterns that distinguish them.
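The window-to-vector-to-clusters pipeline described above can be sketched end to end. In the sketch below, each sliding window of a multi-metric trace is flattened into one vector as a stand-in for the learned CRBM encoding (the real model produces a learned hidden representation, not a raw flatten), and a tiny k-means then assigns each window to a phase; the synthetic two-phase trace is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def windowed(trace, width):
    """Flatten each sliding window of a (time x metrics) trace into one
    vector -- a stand-in for the CRBM window encoding described in the
    text (the actual CRBM representation is learned, not a raw flatten)."""
    return np.stack([trace[t:t + width].ravel()
                     for t in range(len(trace) - width + 1)])

def kmeans(X, k, iters=20):
    """Minimal Lloyd's k-means with deterministic, evenly spread
    initial centers for reproducibility."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic trace with 3 metrics (say CPU, memory, I/O): a low-usage
# phase followed by a high-usage phase.
trace = np.concatenate([rng.normal(0.1, 0.02, (50, 3)),
                        rng.normal(0.9, 0.02, (50, 3))])
labels = kmeans(windowed(trace, width=5), k=2)
```

On this toy trace the two phases are trivially separable; the paper's point is that the CRBM encoding makes such clustering meaningful on real, noisy workload traces where raw values alone would not separate phases cleanly.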
Tuning the configurations of Spark jobs is not a trivial task. State-of-the-art auto-tuning systems are based on iteratively running workloads with different configurations; during the optimization process, the relevant features are explored to find good solutions. Many optimizers enhance the time-to-solution using black-box optimization algorithms that do not take into account any information from the Spark workloads. In this paper, we present a new method for tuning configurations that uses information from a single run of a Spark workload. To achieve good performance, we mine the SparkEventLog generated by the Spark engine. This log file contains a large amount of information about the executed application. We use this information to enhance a performance model with low-level features from the workload to be optimized, including Spark Actions, Transformations, and Task metrics. This process allows us to obtain application-specific workload information. With this information, our system can predict sensible Spark configurations for unseen jobs, given that it has been trained with reasonable coverage of Spark applications. Experiments show that the presented system produces good configurations, achieving up to 80% speedup with respect to the default Spark configuration and up to 12x speedup in time-to-solution with respect to a standard Bayesian Optimization procedure.
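The core idea, coupling workload features mined from the event log with candidate configurations inside a performance model, can be sketched as follows. The feature layout, the linear least-squares model, and the synthetic runtime function are all illustrative assumptions; the paper's actual model and feature set are not specified in the abstract.

```python
import numpy as np

def fit_perf_model(X, y):
    """Least-squares linear performance model with a bias term.
    Each row of X couples workload features (e.g. counts of Actions,
    Transformations, Task metrics mined from a log) with the tried
    configuration; y holds the observed runtimes. A linear model is an
    illustrative stand-in for whatever model the system actually uses."""
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_runtime(coef, X):
    A = np.hstack([X, np.ones((len(X), 1))])
    return A @ coef

def best_config(coef, workload_feats, candidate_configs):
    """Score every candidate configuration for one workload and return
    the one with the lowest predicted runtime."""
    X = np.array([np.concatenate([workload_feats, cfg])
                  for cfg in candidate_configs])
    return candidate_configs[int(np.argmin(predict_runtime(coef, X)))]

# Synthetic training data: one workload feature w, one config knob e
# (say, executor count); runtime improves linearly with more executors.
train_X, train_y = [], []
for w in (10.0, 20.0):
    for e in range(1, 9):
        train_X.append([w, e])
        train_y.append(50.0 - 3.0 * e + 0.5 * w)   # hypothetical runtime
coef = fit_perf_model(np.array(train_X), np.array(train_y))

configs = [[float(e)] for e in range(1, 9)]
chosen = best_config(coef, np.array([15.0]), configs)
```

This mirrors the abstract's workflow: the model is trained once over runs of several applications, and then picking a configuration for a new workload is a cheap prediction-and-argmin step rather than an iterative black-box search.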
Objectives: Heart failure (HF) management has significantly improved over the past two decades, leading to better survival. This study aimed to assess changes in predicted mortality risk after 12 months of management in a multidisciplinary HF clinic. Materials and Methods: Out of 1,032 consecutive HF outpatients admitted from March 2012 to November 2018, 357 completed the 12-month follow-up and had N-terminal pro-B-type natriuretic peptide (NT-proBNP), high-sensitivity troponin T (hs-TnT), and interleukin-1 receptor-like-1 (known as ST2) measurements available both at baseline and at follow-up. Three contemporary risk scores were used: MAGGIC-HF, the Seattle HF Model (SHFM), and the Barcelona Bio-HF (BCN Bio-HF) calculator, which incorporates the three above-mentioned biomarkers. The predicted risk of all-cause death at 1 and 3 years was calculated at baseline and re-evaluated after 12 months. Results: A significant decline in predicted 1- and 3-year mortality risk was observed at 12 months: MAGGIC ~16%, SHFM ~22%, and BCN Bio-HF ~15%. In the HF with reduced ejection fraction (HFrEF) subgroup, guideline-directed medical therapy led to complete normalization of left ventricular ejection fraction (≥50%) in almost a third of patients and to partial normalization (41–49%) in 30% of them. Repeated risk assessment after 12 months with SHFM and BCN Bio-HF provided adequate discrimination for all-cause 3-year mortality (C-index: MAGGIC-HF 0.762, SHFM 0.781, and BCN Bio-HF 0.791). Conclusion: Mortality risk declines in patients with HF managed for 12 months in a multidisciplinary HF clinic. Repeating the mortality risk assessment after optimizing HF treatment is recommended, particularly in the HFrEF subgroup.