Background
This study aimed to assess survival outcomes among patients with out-of-hospital cardiac arrest (CA) who received cardiopulmonary resuscitation (CPR) in China.

Methods
Relevant studies published between January 1, 2010 and September 5, 2022 were retrieved from the EMBASE, PubMed, Cochrane Library, China Biology Medicine disk, China National Knowledge Infrastructure, and Wanfang databases. We included clinical studies in which all patients were diagnosed with CA and underwent out-of-hospital CPR, and in which at least one of the following outcome variables was reported: return of spontaneous circulation (ROSC), survival to admission, survival to hospital discharge, 1-month survival, good neurological outcome, and 1-year survival. Two investigators independently extracted the study data and assessed study quality using a modified Newcastle–Ottawa Scale. The data were pooled using random-effects models.

Results
Of the 3620 identified studies, 49 (63,378 patients) were included in the meta-analysis. The pooled ROSC rate was 9.0% (95% confidence interval [CI] 7.5–10.5%, I² = 97%), the pooled survival-to-admission rate was 5.0% (95% CI 2.7–8.0%, I² = 98%), and the pooled survival-to-discharge rate was 1.8% (95% CI 1.2–2.5%, I² = 95%). The ROSC rate of patients who received bystander CPR was significantly higher than that of those who did not (pooled odds ratio [OR] 7.92, 95% CI 4.32–14.53, I² = 85%). Likewise, the ROSC rate of patients in whom CPR was started within 5 min was significantly higher than that of those in whom it was started after 5 min (pooled OR 5.92, 95% CI 1.92–18.26, I² = 85%), and the ROSC rate of patients who received defibrillation was significantly higher than that of those who did not (pooled OR 8.52, 95% CI 3.72–19.52, I² = 77%).

Conclusion
The survival outcomes of out-of-hospital CPR in China are far below the world average.
Therefore, policies that provide automated external defibrillators (AEDs) in public places and strengthen CPR training for both healthcare providers and the general public should be encouraged and disseminated nationwide.

Trial registration
This study was registered in PROSPERO (CRD42022326165) on 29 April 2022.
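The Methods state only that rates were "pooled using random-effects models." As an illustration of how such pooling typically works, here is a minimal sketch of the standard DerSimonian–Laird random-effects estimator; this is not the authors' analysis code, and the function name is our own. In practice, proportions would usually be logit- or arcsine-transformed before pooling.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect estimates under a random-effects model,
    using the DerSimonian-Laird moment estimator of tau^2 (the
    between-study variance). Returns (pooled effect, SE, tau^2)."""
    k = len(effects)
    # Fixed-effect (inverse-variance) weights and pooled estimate.
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q heterogeneity statistic.
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Re-weight with the between-study variance added in.
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2
```

When the studies agree exactly, Q is zero, tau² collapses to zero, and the result reduces to the ordinary inverse-variance (fixed-effect) pooled estimate.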
Maintaining long-term exploration ability remains one of the challenges of deep reinforcement learning (DRL). In practice, reward-shaping approaches are leveraged to provide intrinsic rewards that incentivize the agent's exploration. However, most existing intrinsic reward shaping (IRS) modules rely on attendant models or additional memory to record and analyze the learning procedure, which leads to high computational complexity and low robustness. Moreover, they overemphasize the influence of a single state on exploration and cannot evaluate exploration performance from a global perspective. To tackle this problem, state entropy-based methods have been proposed that encourage the agent to visit the state space more equitably. However, their estimation error and sample complexity become prohibitive in environments with high-dimensional observations. In this paper, we introduce a novel metric, Jain's fairness index (JFI), to replace the entropy regularizer; it requires no additional models or memory. In particular, JFI overcomes the vanishing-intrinsic-rewards problem and generalizes to arbitrary tasks. Furthermore, we use a variational auto-encoder (VAE) model to capture the life-long novelty of states. The global JFI score and the local state novelty are then combined into a multimodal intrinsic reward that controls the extent of exploration more precisely. Finally, extensive simulation results demonstrate that our multimodal reward shaping (MMRS) method achieves higher performance than other benchmark schemes. Our code is available at https://github.com/yuanmingqi/MMRS. Preprint. Under review.
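Jain's fairness index itself is a standard formula: J(x) = (Σxᵢ)² / (n · Σxᵢ²). The following sketch shows how it can score how equitably an agent has visited states, as the abstract describes; the function name is our own, and this is an illustration of the metric rather than the paper's implementation.

```python
def jain_fairness_index(visits):
    """Jain's fairness index of an allocation: (sum x)^2 / (n * sum x^2).
    Equals 1.0 for a perfectly uniform allocation and falls to 1/n when
    a single element receives everything."""
    n = len(visits)
    total = sum(visits)
    sq = sum(v * v for v in visits)
    return (total * total) / (n * sq)

# Uniform state visitation is maximally "fair" (score 1.0); concentrating
# all visits on one state drives the score down toward 1/n.
print(jain_fairness_index([5, 5, 5, 5]))   # 1.0
print(jain_fairness_index([20, 0, 0, 0]))  # 0.25
```

Because the score is bounded in (0, 1] regardless of the magnitude of the visit counts, it does not decay as counts grow, which is consistent with the abstract's claim of avoiding vanishing intrinsic rewards.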
One of the most critical challenges in deep reinforcement learning is maintaining the long-term exploration capability of the agent. To tackle this problem, it has recently been proposed to provide intrinsic rewards that encourage the agent to explore. However, most existing intrinsic reward-based methods in the literature fail to provide sustainable exploration incentives, a problem known as vanishing rewards. In addition, these conventional methods require complex models and additional memory in their learning procedures, resulting in high computational complexity and low robustness. In this work, a novel intrinsic reward module based on the Rényi entropy is proposed to provide high-quality intrinsic rewards. It is shown that the proposed method generalizes existing state entropy maximization methods. In particular, a k-nearest neighbor estimator is introduced for entropy estimation, and a k-value search method is designed to guarantee estimation accuracy. Extensive simulation results demonstrate that the proposed Rényi entropy-based method achieves higher performance than existing schemes. The simulation code used in this work is available on GitHub.

Impact Statement: Reinforcement learning (RL) has demonstrated impressive performance in many complex games such as Go and StarCraft. However, existing RL algorithms suffer from prohibitively expensive computational complexity, poor generalization ability, and low robustness, which hinder their practical application in the real world. Thus, it is essential to develop more effective RL algorithms for real-life applications such as autonomous driving and smart manufacturing. To this end, one critical design challenge is to improve the exploration mechanism of RL to realize efficient policy learning.
This work proposes a simple yet effective method that can significantly improve the exploration ability of RL algorithms and can be readily applied to real-life applications. For instance, it can facilitate the development of more powerful autonomous driving systems that adapt to more complex and challenging environments. Finally, this work is also expected to inspire subsequent research.
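k-nearest-neighbor entropy estimators of the kind this abstract describes typically turn the distance from each state to its k-th nearest neighbor in a batch into a per-state intrinsic reward (states in sparsely visited regions have distant neighbors and thus earn more). The sketch below illustrates that general particle-based idea only; it is not the paper's Rényi estimator or its k-value search, and the log(1 + d) reward shape is one common convention, not necessarily the authors'.

```python
import math

def knn_intrinsic_rewards(states, k=3):
    """Assign each state in a batch an intrinsic reward based on the
    Euclidean distance to its k-th nearest neighbor within the batch,
    a simple particle-based proxy for local state-space density.
    `states` is a list of equal-length coordinate tuples."""
    rewards = []
    for i, si in enumerate(states):
        # Distances from state i to every other state in the batch.
        dists = sorted(
            math.dist(si, sj) for j, sj in enumerate(states) if j != i
        )
        # Isolated states (large k-NN distance) receive larger rewards.
        rewards.append(math.log(1.0 + dists[k - 1]))
    return rewards
```

The brute-force O(n²) distance computation is only for clarity; a real implementation would use a spatial index (e.g. a k-d tree) and, per the abstract, would also have to choose k carefully, since the estimate is sensitive to it.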