In patients with type 2 diabetes mellitus (DM2), catheter ablation (CA) of atrial fibrillation (AF) provides significant clinical benefits over antiarrhythmic drug therapy (ADT) and appears to be a reasonable approach in terms of feasibility, effectiveness, and low procedural risk.
Aim
Predictions of plant traits over space and time are increasingly used to improve our understanding of plant community responses to global environmental change. A necessary step forward is to assess the reliability of global trait predictions. In this study, we predict community mean plant traits at the global scale and present a systematic evaluation of their reliability in terms of the accuracy of the models, ecological realism and various sources of uncertainty.
Location
Global.
Time period
Present.
Major taxa studied
Vascular plants.
Methods
We predicted global distributions of community mean specific leaf area, leaf nitrogen concentration, plant height and wood density with an ensemble modelling approach based on georeferenced, locally measured trait data representative of the plant community. We assessed the predictive performance of the models, the plausibility of predicted trait combinations, the influence of data quality, and the uncertainty across geographical space attributed to spatial extrapolation and diverging model predictions.
Results
Ensemble predictions of community mean plant height, specific leaf area and wood density resulted in ecologically plausible trait–environment relationships and trait–trait combinations. Leaf nitrogen concentration, however, could not be predicted reliably. The ensemble approach was better at predicting community trait means than any of the individual modelling techniques, which varied greatly in predictive performance and led to divergent predictions, mostly in African deserts and the Arctic, where predictions were also extrapolated. High data quality (i.e., including intraspecific variability and a representative species sample) increased model performance by 28%.
Main conclusions
Plant community traits can be predicted reliably at the global scale when using an ensemble approach and high‐quality data for traits that mostly respond to large‐scale environmental factors.
We recommend applying ensemble forecasting to account for model uncertainty, using representative trait data, and more routinely assessing the reliability of trait predictions.
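The core of the ensemble forecasting idea described above is to fit several modelling techniques to the same trait data, average their predictions, and use the spread across techniques as a per-location uncertainty estimate. The following is a minimal sketch of that logic, not the study's actual pipeline: the data are synthetic, and the three scikit-learn regressors stand in for whatever techniques an ensemble might combine.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Hypothetical environmental predictors (e.g. temperature, precipitation)
# and a community mean trait (e.g. plant height) they partly determine.
X = rng.normal(size=(200, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.3, size=200)

X_train, X_new = X[:150], X[150:]
y_train = y[:150]

# Fit several modelling techniques on the same trait data.
models = [LinearRegression(),
          RandomForestRegressor(n_estimators=100, random_state=0),
          KNeighborsRegressor(n_neighbors=5)]
preds = np.column_stack([m.fit(X_train, y_train).predict(X_new)
                         for m in models])

# Ensemble prediction: the mean across techniques; the spread across
# techniques serves as a model-uncertainty estimate at each location.
ensemble_mean = preds.mean(axis=1)
ensemble_spread = preds.std(axis=1)
```

Locations where `ensemble_spread` is large correspond to the diverging-prediction regions flagged in the abstract (e.g. African deserts and the Arctic).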
Background
Tocilizumab blocks the pro-inflammatory activity of interleukin-6 (IL-6), which is involved in the pathogenesis of pneumonia, the most frequent cause of death in COVID-19 patients.
Methods
A multicenter, single-arm, hypothesis-driven trial was planned, according to a phase 2 design, to study the effect of tocilizumab on lethality rates at 14 and 30 days (co-primary endpoints, with a priori expected rates of 20% and 35%, respectively). A further prospective cohort of patients, enrolled consecutively after enrolment in the first cohort was completed, was used as a secondary validation dataset. The two cohorts were evaluated jointly in an exploratory multivariable logistic regression model to assess the effect of prognostic variables on survival.
Results
In the primary intention-to-treat (ITT) phase 2 population, 180/301 (59.8%) subjects received tocilizumab, and 67 deaths were observed overall. Lethality rates were 18.4% (97.5% CI: 13.6–24.0, P = 0.52) and 22.4% (97.5% CI: 17.2–28.3, P < 0.001) at 14 and 30 days, respectively. Lethality rates were lower in the validation dataset, which included 920 patients. No signal of specific drug toxicity was reported. In the exploratory multivariable logistic regression analysis, older age and lower PaO2/FiO2 ratio negatively affected survival, while the concurrent use of steroids was associated with greater survival. A statistically significant interaction was found between tocilizumab and respiratory support, suggesting that tocilizumab might be more effective in patients not requiring mechanical respiratory support at baseline.
Conclusions
Tocilizumab reduced the lethality rate at 30 days compared with the null hypothesis, without significant toxicity. This effect may be limited to patients not requiring mechanical respiratory support at baseline.
Registration EudraCT (2020-001110-38); clinicaltrials.gov (NCT04317092).
Aim
The recent recovery of large carnivores in Europe has been explained as resulting from a decrease in human persecution driven by widespread rural land abandonment, paralleled by forest cover increase and the consequent increase in availability of shelter and prey. We investigated whether land cover and human population density changes are related to the relative probability of occurrence of three European large carnivores: the grey wolf (Canis lupus), the Eurasian lynx (Lynx lynx) and the brown bear (Ursus arctos).
Location
Europe, west of 64° longitude.
Methods
We fitted multi‐temporal species distribution models using >50,000 occurrence points with time series of land cover, landscape configuration, protected areas, hunting regulations and human population density covering a 24‐year period (1992–2015). Within the temporal window considered, we then predicted changes in habitat suitability for large carnivores throughout Europe.
Results
Between 1992 and 2015, the habitat suitability for the three species increased in Eastern Europe, the Balkans, North‐West Iberian Peninsula and Northern Scandinavia, but showed mixed trends in Western and Southern Europe. These trends were primarily associated with increases in forest cover and decreases in human population density, and, additionally, with decreases in the cover of mosaics of cropland and natural vegetation.
Main conclusions
Recent land cover and human population changes appear to have altered the habitat suitability pattern for large carnivores in Europe, whereas protection level did not play a role. While projected changes largely match the observed recovery of large carnivore populations, we found mismatches with the recent expansion of wolves in Central and Southern Europe, where factors not included in our models may have played a dominant role. This suggests that large carnivores’ co‐existence with humans in European landscapes is limited not by habitat availability but by other factors, such as human tolerance and favourable policies.
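The multi-temporal approach described in the Methods amounts to fitting a presence model on environmental covariates and then re-evaluating suitability under the covariate values of a later date. The sketch below illustrates only that idea with a single logistic model and synthetic data; the two covariates are simplified stand-ins for the study's land cover and population layers, not its actual variables or ensemble of techniques.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000

# Hypothetical presence/background points with two covariates.
forest_cover = rng.uniform(0, 1, n)          # fraction forested
human_density = rng.lognormal(2, 1, n)       # people per km^2
lin = 3 * forest_cover - 0.05 * human_density - 1
presence = rng.binomial(1, 1 / (1 + np.exp(-lin)))

# Fit the distribution model on the earlier time step.
X_1992 = np.column_stack([forest_cover, human_density])
model = LogisticRegression().fit(X_1992, presence)

# Re-evaluate suitability under changed covariates (more forest,
# fewer people, as in the trends reported above) and take the change.
X_2015 = np.column_stack([forest_cover + 0.1, human_density * 0.9])
delta = (model.predict_proba(X_2015)[:, 1]
         - model.predict_proba(X_1992)[:, 1])
```

Positive values of `delta` mark locations where the modelled suitability increased over the period.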
The COVID-19 outbreak had a major impact on the organization of care in Italy, and a survey was launched to evaluate the provision of arrhythmia care during the outbreak (March–April 2020). A total of 104 physicians from 84 Italian arrhythmia centres took part in the survey. The vast majority of participating centres (95.2%) reported a significant reduction in the number of elective pacemaker implantations during the outbreak period compared with the corresponding two months of 2019 (50.0% of centres reported a reduction of > 50%). Similarly, 92.9% of participating centres reported a significant reduction in the number of implantable cardioverter-defibrillator (ICD) implantations for primary prevention, and 72.6% a significant reduction in ICD implantations for secondary prevention (> 50% in 65.5% and 44.0% of the centres, respectively). The majority of participating centres (77.4%) reported a significant reduction in the number of elective ablations (> 50% in 65.5% of the centres). Interventional procedures performed in an emergency setting, as well as the acute management of atrial fibrillation, also showed a marked reduction. These findings indicate that COVID-19 disrupted the entire organization of health care, with a massive impact on the activities and procedures related to arrhythmia management in Italy.
Aims
This registry was created to describe the experience of 76 Italian centres with a large cohort of recipients of multipoint pacing (MPP)-capable cardiac resynchronization therapy (CRT) devices.
Methods and results
A total of 507 patients in whom these devices had been successfully implanted were enrolled between August 2013 and May 2015. We analysed: (i) current clinical practices for the management of such patients, and (ii) the impact of MPP on heart failure clinical composite response and on the absolute change in ejection fraction (EF) at 6 months. Multipoint pacing was programmed to ‘ON’ in 46% of patients before discharge. Methods of optimizing MPP programming were most commonly based on either the greatest narrowing of the QRS complex (38%) or the electrical delays between the electrodes (34%). Clinical and echocardiographic follow-up data were evaluated in 232 patients. These patients were divided into two groups according to whether MPP was programmed to ‘ON’ (n = 94) or ‘OFF’ (n = 138) at the time of discharge. At 6 months, EF was significantly higher in the MPP group than in the biventricular-pacing group (39.1 ± 9.6 vs. 34.7 ± 7.6%; P < 0.001). Even after adjustments, early MPP activation remained an independent predictor of an absolute increase in LVEF of ≥5% (odds ratio 2.5; P = 0.001). At 6 months, an improvement in clinical composite score was recorded in a greater proportion of patients with MPP-ON than in controls (56 vs. 38%; P = 0.009). On comparing optimal MPP and conventional vectors, QRS duration was also seen to have decreased significantly (P < 0.001).
Conclusion
This study provides information that is essential in order to deal with the expected increase in the number of patients receiving MPP devices in the coming years. The results revealed different practices among centres, and establishing the optimal programming that can maximize the benefit of MPP remains a challenging issue. Compared with conventional CRT, MPP improved clinical status and resulted in an additional increase in EF.
Clinical Trial Registration
http://www.clinicaltrial.gov/. Unique identifier: NCT02606071.