The high computation and memory demands of large deep neural network (DNN) models pose intensive challenges to the conventional von Neumann architecture, incurring substantial data movement in the memory hierarchy. The memristor crossbar array has emerged as a promising solution to mitigate these challenges and enable low-power acceleration of DNNs. Memristor-based weight pruning and weight quantization have been investigated separately and proven effective in reducing area and power consumption compared to the original DNN model. However, there has been no systematic investigation of memristor-based neuromorphic computing (NC) systems that considers both weight pruning and weight quantization. In this paper, we propose a unified and systematic memristor-based framework considering both structured weight pruning and weight quantization by incorporating the alternating direction method of multipliers (ADMM) into DNN training. We consider hardware constraints such as crossbar-block pruning, conductance range, and the mismatch between weight values and real devices, to achieve high accuracy, low power, and a small area footprint. Our framework integrates three main steps: memristor-based ADMM-regularized optimization, masked mapping, and retraining. Experimental results show that our proposed framework achieves a 29.81× (20.88×) weight compression ratio, with 98.38% (96.96%) power reduction and 98.29% (97.47%) area reduction on the VGG-16 (ResNet-18) network, at only 0.5% (0.76%) accuracy loss compared to the original DNN models. We share our models at the anonymous link http://bit.ly/2Jp5LHJ.
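The core of an ADMM-regularized training step of this kind is a projection onto the combined constraint set: keep only the strongest weight columns (structured pruning) and snap surviving weights to a small grid of values (mimicking discrete memristor conductance levels). The following is a minimal sketch of that projection; the level grid, keep ratio, and function name are illustrative assumptions, not the paper's actual hyperparameters or API.

```python
import numpy as np

def project_structured_quantized(W, col_keep_ratio=0.5, levels=None):
    """Project weight matrix W onto the constraint set: keep the top
    columns by L2 norm, then quantize surviving weights to the nearest
    of a fixed set of levels (assumed conductance states)."""
    if levels is None:
        levels = np.linspace(-1.0, 1.0, 8)  # assumed 3-bit conductance grid
    col_norms = np.linalg.norm(W, axis=0)
    k = max(1, int(round(col_keep_ratio * W.shape[1])))
    keep = np.argsort(col_norms)[-k:]        # indices of columns to keep
    Z = np.zeros_like(W)
    kept = W[:, keep]
    # snap each kept entry to the nearest quantization level
    idx = np.abs(kept[..., None] - levels).argmin(axis=-1)
    Z[:, keep] = levels[idx]
    return Z

W = np.array([[0.9, 0.05, -0.7, 0.02],
              [0.8, -0.03, 0.6, 0.01]])
Z = project_structured_quantized(W, col_keep_ratio=0.5)
print(Z)  # two columns zeroed out; the rest snapped to grid values
```

In full ADMM training this projection would be applied to the auxiliary variable each iteration, while the loss plus the augmented-Lagrangian penalty is minimized by ordinary SGD over the unconstrained weights.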
Background
Atelectasis is the primary cause of hypoxemia during general anesthesia. This study aimed to evaluate, using lung ultrasound, the impact of combining recruitment maneuvers (RM) with positive end-expiratory pressure (PEEP) on the incidence of atelectasis in adult women undergoing gynecologic laparoscopic surgery.
Methods
In this study, 42 patients with healthy lungs undergoing laparoscopic gynecologic surgery were randomly divided into the recruitment maneuver group (RM group; 6 cm H2O PEEP and RM) or the control group (C group; 6 cm H2O PEEP and no RM), with 21 patients in each group. Volume-controlled ventilation was used in all selected patients, with a tidal volume of 6–8 mL·kg−1 of ideal body weight. When atelectasis was detected, patients in the RM group received ultrasound-guided RM, while those in the C group received no intervention. The incidence and severity of atelectasis were determined using lung ultrasound scores.
Results
A total of 41 patients were investigated. The incidence of atelectasis was lower in the RM group (40%) than in the C group (80%) 15 min after arrival in the post-anesthesia care unit (PACU). Meanwhile, lung ultrasound scores (LUSs) were lower in the RM group than in the C group. The difference in LUS between the two groups was mainly attributable to differences in the scores for the posterior lung regions. However, this difference did not persist 24 h after surgery.
Conclusions
In conclusion, the combination of RM and PEEP could reduce the incidence of atelectasis in patients with healthy lungs 15 min after arrival at the PACU; however, this effect disappeared within 24 h after surgery.
Trial registration
(Prospectively registered): ChiCTR2000033529. Registered on 4/6/2020.
State-of-the-art DNN structures involve high computation and a great demand for memory storage, which pose intensive challenges to DNN framework resources. To mitigate these challenges, weight pruning techniques have been studied. However, a high-accuracy solution for extreme structured pruning that combines different types of structured sparsity remains elusive because of the drastically reduced number of weights in the DNN. In this paper, we propose a DNN framework that combines two different types of structured weight pruning (filter and column pruning) by incorporating the alternating direction method of multipliers (ADMM) algorithm for better pruning performance. We are the first to identify the non-optimality of the ADMM process and the presence of unused weights in a structured pruned model, and we further design an optimization framework containing the first proposed Network Purification and Unused Path Removal algorithms, which are dedicated to post-processing a structured pruned model after the ADMM steps. Highlights include 232× compression on LeNet-5, 60× compression on ResNet-18 on CIFAR-10, and over 5× compression on AlexNet. We share our models at the anonymous link http://bit.ly/2VJ5ktv.
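The "unused weights" the abstract refers to arise because pruning in adjacent layers interacts: if every weight in column j of layer l+1 is pruned, the j-th output filter of layer l feeds nothing and can be removed outright. The following is a minimal sketch of such an unused-path-removal pass for two fully connected layers, under assumed weight layouts; it is an illustration of the idea, not the paper's actual algorithm.

```python
import numpy as np

def remove_unused_paths(W_cur, W_next, tol=0.0):
    """W_cur: (out, in) weights of layer l; W_next: (out2, out) of layer l+1.
    Drop rows of W_cur (output filters) whose corresponding columns of
    W_next are entirely pruned, since those paths carry no signal."""
    used = np.abs(W_next).max(axis=0) > tol   # is column j of W_next nonzero?
    return W_cur[used], W_next[:, used]

W_cur = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])          # layer l: 3 output filters
W_next = np.array([[0.5, 0.0, -0.2]])   # layer l+1: column 1 fully pruned
W_cur2, W_next2 = remove_unused_paths(W_cur, W_next)
print(W_cur2.shape, W_next2.shape)  # filter 1 of layer l removed
```

For convolutional layers the same check would apply per output channel rather than per matrix row, and removing a path shrinks both layers' tensors, which is where the extra compression beyond the ADMM solution comes from.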