Most deep neural networks deployed today are trained on GPUs via high-level frameworks such as TensorFlow [1] and PyTorch [2]. This paper describes changes we made to the GPGPU-Sim simulator [3], [4] that enable it to run PyTorch by executing the PTX kernels included in NVIDIA's cuDNN [5] library. We use the resulting modified simulator, released publicly with this paper, to study some simple deep learning workloads. With our changes to GPGPU-Sim's functional simulation model, we find that GPGPU-Sim's performance model running a cuDNN-enabled implementation of LeNet for MNIST reports results within 30% of real hardware. Using GPGPU-Sim's AerialVision performance analysis tool, we observe that cuDNN API calls contain many distinct phases and appear to include potentially inefficient microarchitectural behavior such as DRAM partition camping, at least when executed on GPGPU-Sim's current performance model.
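To give a concrete sense of the kind of cuDNN-backed workload described above, the following is a minimal PyTorch sketch of a LeNet-style network for MNIST. The exact network definition used in the paper is not reproduced here, so this particular architecture and the names in it are our assumptions; the point is that running it on a CUDA device causes PyTorch to dispatch cuDNN PTX kernels, which is what the modified simulator intercepts.

```python
import torch
import torch.nn as nn

class LeNet(nn.Module):
    """Illustrative LeNet-style CNN for 28x28 MNIST images (our sketch,
    not the authors' code). Conv layers dispatch to cuDNN on CUDA devices."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # -> cudnnConvolutionForward
            nn.ReLU(),
            nn.MaxPool2d(2),                            # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),            # 14x14 -> 10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                            # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, 10),
        )

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# On a CUDA device (or a simulator standing in for one), the forward pass
# below issues the cuDNN kernels that GPGPU-Sim executes functionally.
model = LeNet().cuda()
out = model(torch.randn(1, 1, 28, 28, device="cuda"))
print(out.shape)  # torch.Size([1, 10])
```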
Deep Neural Networks (DNNs) are the state of the art in image, speech, and text processing. To address long training times and high energy consumption, custom accelerators can exploit sparsity, that is, zero-valued weights, activations, and gradients. Proposed sparse Convolutional Neural Network (CNN) accelerators support training with at most one dynamically sparse convolution input. Among existing accelerator classes, the only ones supporting two-sided dynamic sparsity are outer-product-based accelerators. However, when mapping a convolution onto an outer product, multiplications occur that do not correspond to any valid output. These Redundant Cartesian Products (RCPs) decrease energy efficiency and performance. We observe that in sparse training, up to 90% of computations are RCPs, arising from the convolution of large matrices for weight updates during the backward pass of CNN training. In this work, we design a mechanism, ANT, to anticipate and eliminate RCPs, enabling more efficient sparse training when integrated with an outer-product accelerator. By anticipating over 90% of RCPs, ANT achieves a geometric-mean speedup of 3.71× over the baseline outer-product accelerator on 90% sparse training using ResNet18 [35], VGG16 [73], and Wide ResNet (WRN) [85], with a 4.40× decrease in energy consumption and 0.0017 mm² of additional area. We extend ANT to sparse matrix multiplication, so that the same accelerator can anticipate RCPs in sparse fully-connected layers, transformers, and RNNs.
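To make the RCP notion concrete, here is a minimal 1-D Python sketch (our illustration, not the paper's mechanism). An outer-product accelerator computing a weight-gradient convolution enumerates the Cartesian product of nonzero inputs and nonzero gradients; each product maps to an output index, and products whose index falls outside the small weight-gradient output are RCPs. The function name and parameters below are hypothetical.

```python
import numpy as np

def weight_grad_outer_product(x, g, K):
    """Compute the 1-D weight gradient dW[k] = sum_j x[j + k] * g[j] by
    enumerating the Cartesian product of nonzero inputs and gradients,
    as an outer-product accelerator would. Products whose target index
    k = i - j falls outside [0, K) are Redundant Cartesian Products."""
    dW = np.zeros(K)
    useful, redundant = 0, 0
    for i in np.flatnonzero(x):      # every nonzero input pairs with ...
        for j in np.flatnonzero(g):  # ... every nonzero gradient
            k = i - j                # target weight-gradient index
            if 0 <= k < K:
                dW[k] += x[i] * g[j]
                useful += 1
            else:
                redundant += 1       # RCP: no valid output coordinate
    return dW, useful, redundant

# Toy example: ~90% sparse input and gradient, small kernel. Because the
# input (length N) is much longer than the kernel (length K), most nonzero
# pairs map outside the K valid output indices.
rng = np.random.default_rng(0)
N, K = 64, 3
M = N - K + 1                        # gradient length for a "valid" conv
x = rng.random(N) * (rng.random(N) < 0.1)
g = rng.random(M) * (rng.random(M) < 0.1)
dW, useful, redundant = weight_grad_outer_product(x, g, K)
print(f"useful={useful}, redundant={redundant}, "
      f"RCP fraction={redundant / (useful + redundant):.2f}")
```

Roughly K/N of the pairs are useful here, so the RCP fraction is about 95%, consistent with the abstract's observation that backward-pass weight-update convolutions, where a large input is convolved with a large gradient to produce a small kernel gradient, dominate RCP counts.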