Model quantization is a widely used technique to compress and accelerate deep neural network (DNN) inference. Emerging DNN hardware accelerators are beginning to support mixed precision (1-8 bits) to further improve computation efficiency, which raises a great challenge: finding the optimal bitwidth for each layer requires domain experts to explore a vast design space, trading off among accuracy, latency, energy, and model size, which is both time-consuming and sub-optimal. There is plenty of specialized hardware for neural networks, but little research has been done on specializing neural network optimization for a particular hardware architecture. Conventional quantization algorithms ignore the differences between hardware architectures and quantize all layers in a uniform way. In this paper, we introduce the Hardware-Aware Automated Quantization (HAQ) framework, which leverages reinforcement learning to automatically determine the quantization policy, and we take the hardware accelerator's feedback into the design loop. Rather than relying on proxy signals such as FLOPs and model size, we employ a hardware simulator to generate direct feedback signals (latency and energy) for the RL agent. Compared with conventional methods, our framework is fully automated and can specialize the quantization policy for different neural network architectures and hardware architectures. Our framework effectively reduces latency by 1.4-1.95× and energy consumption by 1.9× with negligible loss of accuracy compared with fixed-bitwidth (8-bit) quantization. Our framework reveals that the optimal policies on different hardware architectures (i.e., edge and cloud architectures) under different resource constraints (i.e., latency, energy, and model size) are drastically different. We interpret the implications of different quantization policies, which offer insights for both neural network architecture design and hardware architecture design.
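The per-layer trade-off that HAQ's RL agent navigates can be illustrated with a minimal sketch (not the HAQ implementation itself): a symmetric linear quantizer applied at different bitwidths, showing how reconstruction error grows as the bitwidth shrinks. The function name and data here are hypothetical, for illustration only.

```python
import numpy as np

def quantize(weights, bits):
    """Symmetric linear (uniform) quantization of a tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 127 for 8 bits
    scale = np.abs(weights).max() / qmax            # map the largest weight to qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax)
    return q * scale                                # dequantize ("fake quantization")

# Lower bitwidths save latency/energy but raise reconstruction error,
# and each layer tolerates this differently -- hence mixed precision.
rng = np.random.default_rng(0)
w = rng.standard_normal(1000)
for bits in (8, 4, 2):
    print(bits, float(np.abs(quantize(w, bits) - w).mean()))
```

In a mixed-precision policy, each layer would get its own `bits` value, chosen so that latency- or energy-critical layers use fewer bits while sensitive layers keep more.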
[Figure 1: plot of latency (ms) vs. top-1 accuracy (%) for MobileNets under fixed 8-bit quantization and our flexible-bit quantization; marker size denotes model size (1-3 MB).]
Figure 1: We need mixed precision for different layers. We quantize MobileNets [12] to different numbers of bits (both weights and activations), and the result lies on a better Pareto curve (yellow) than fixed-bitwidth quantization (blue). The reason is that different layers have different redundancy and different arithmetic intensity (OPs/byte) on the hardware, which advocates for using mixed precision for different layers.
We present APQ for efficient deep learning inference on resource-constrained hardware. Unlike previous methods that separately search the neural architecture, pruning policy, and quantization policy, we optimize them in a joint manner. To deal with the larger design space this brings, a promising approach is to train a quantization-aware accuracy predictor that quickly estimates the accuracy of a quantized model and feeds it to the search engine to select the best fit. However, training this quantization-aware accuracy predictor requires collecting a large number of (quantized model, accuracy) pairs, which involves quantization-aware finetuning and is thus highly time-consuming. To tackle this challenge, we propose to transfer knowledge from a full-precision (i.e., fp32) accuracy predictor to the quantization-aware (i.e., int8) accuracy predictor, which greatly improves sample efficiency. Moreover, collecting the dataset for the fp32 accuracy predictor only requires evaluating neural networks sampled from a pretrained once-for-all [3] network, without any training cost, which is highly efficient. Extensive experiments on ImageNet demonstrate the benefits of our joint optimization approach. At the same accuracy, APQ reduces latency/energy by 2×/1.3× over MobileNetV2+HAQ [30,36]. Compared to the separate optimization approach (ProxylessNAS+AMC+HAQ [5,12,36]), APQ achieves 2.3% higher ImageNet accuracy while reducing GPU hours and CO2 emission by orders of magnitude, pushing the frontier of environmentally friendly green AI. The code and video are publicly available.
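The predictor-transfer idea can be sketched with a toy model (purely illustrative; APQ's actual predictor operates on neural architecture and quantization-policy encodings). Here a linear "accuracy predictor" is first fit on plentiful full-precision pairs, then warm-started from those weights and briefly fine-tuned on a small set of quantization-aware pairs, which is where the sample efficiency comes from. All data, names, and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: each row encodes a model, the target is its accuracy.
# fp32 pairs are cheap and plentiful; quantization-aware pairs are scarce.
X_fp32 = rng.random((500, 8)); y_fp32 = X_fp32 @ rng.random(8)
X_q    = rng.random((20, 8));  y_q    = X_q @ rng.random(8)

def fit_linear(X, y, w0=None, lr=0.1, steps=2000):
    """Least-squares fit by gradient descent, optionally warm-started at w0."""
    w = np.zeros(X.shape[1]) if w0 is None else w0.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

w_fp32 = fit_linear(X_fp32, y_fp32)               # full-precision predictor
w_q = fit_linear(X_q, y_q, w0=w_fp32, steps=200)  # transfer: warm start, short finetune
```

The warm start means the quantization-aware predictor only needs a short finetune on the few expensive quantized samples, rather than training from scratch.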
Background and aim: The sensitivity and specificity of biomarkers and scoring systems used for predicting fatality in severe sepsis patients remain unsatisfactory. This study aimed to determine the prognostic value of circulating plasma DNA levels in severe septic patients presenting at the Emergency Department (ED).
Methods: Sixty-seven consecutive patients with severe sepsis and 33 controls were evaluated. Plasma DNA levels were estimated by real-time quantitative polymerase chain reaction assay using primers for the human β-hemoglobin and ND2 genes. The patients' clinical and laboratory data on admission were analyzed.
Results: The median plasma nuclear and mitochondrial DNA levels for severe septic patients on admission were significantly higher than those of the controls. The mean plasma nuclear DNA level on admission correlated with lactate concentration (γ = 0.36, p = 0.003) and with plasma mitochondrial DNA on admission (γ = 0.708, p < 0.001). Significant prognostic factors for fatality included mechanical ventilation within the first 24 hours (p = 0.013), mean sequential organ failure assessment (SOFA) score on admission (p = 0.04), serum lactate (p < 0.001), and both plasma nuclear and mitochondrial DNA on admission (p < 0.001). By stepwise logistic regression, plasma mitochondrial DNA was an independent predictor of fatality: each 1 ng/mL increase in level raised the fatality rate by 0.7%.
Conclusion: Plasma DNA has potential use for predicting outcome in septic patients arriving at the emergency room. The plasma mitochondrial DNA level on admission is a more powerful predictor than lactate concentration or SOFA score on admission.