2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS)
DOI: 10.1109/aicas51828.2021.9458494
A Flexible and Fast PyTorch Toolkit for Simulating Training and Inference on Analog Crossbar Arrays

Cited by 76 publications (29 citation statements) · References 12 publications
“…Then we apply and compare them to SGD and DNN training with different material and reference offset settings. For simulations, we used the PYTORCH-based (Paszke et al., 2019) open-source toolkit IBM Analog Hardware Acceleration Kit (AIHWKIT) (Rasch et al., 2021).…”
Section: Results (mentioning; confidence: 99%)
See 2 more Smart Citations
“…Then we apply and compare them to SGD and DNN training with different material and reference offsets settings. For simulations, we used the PYTORCHbased (Paszke et al, 2019) open source toolkit 5 IBM Analog Hardware Acceleration Kit (AIHWKIT) (Rasch et al, 2021).…”
Section: Resultsmentioning
confidence: 99%
“…Using resistive crossbar arrays to compute an MVM in-memory was suggested early on (Steinbuch, 1961), and multiple prototype chips that accelerate the MVMs of DNNs during inference have been described (Wan et al., 2022; Khaddam-Aljameh et al., 2021; Xue et al., 2021; Fick et al., 2022; Narayanan et al., 2021). In principle, in all these solutions, the weights of a linear layer are stored in a crossbar array of tunable conductances, and inputs are encoded e.g.…”
Section: Analog Matrix-Vector Multiplication (mentioning; confidence: 99%)
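The citation above describes the core idea behind crossbar-based acceleration: weights of a linear layer are stored as tunable conductances and the column currents implement the dot products. A minimal NumPy sketch of that scheme, using an assumed differential conductance-pair encoding and a simple Gaussian read-noise model (illustrative assumptions, not the AIHWKIT implementation), might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_mvm(weights, x, g_max=25e-6, read_noise=0.0):
    """Sketch of an analog MVM: map weights to differential conductance
    pairs, sum column currents (Ohm's law + Kirchhoff), decode output.
    g_max and the noise model are illustrative assumptions."""
    w_max = np.abs(weights).max()
    scale = g_max / w_max                         # weight -> conductance scale
    g_plus = np.clip(weights, 0, None) * scale    # positive conductance column
    g_minus = np.clip(-weights, 0, None) * scale  # negative conductance column
    g_eff = g_plus - g_minus
    if read_noise > 0:
        # additive Gaussian read noise on the effective conductances
        g_eff = g_eff + rng.normal(0.0, read_noise * g_max, g_eff.shape)
    currents = g_eff @ x                          # column current summation
    return currents / scale                       # decode back to weight units

W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
ideal = W @ x
analog = crossbar_mvm(W, x, read_noise=0.01)
```

With `read_noise=0.0` the decode step recovers the exact digital MVM; nonzero noise perturbs the result, which is the kind of nonideality toolkits like AIHWKIT model in far greater detail.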
“…We examine the accuracy of DNNs using these PCMs with and without liners using IBM's analog AI simulation tool. [17] We compare all devices across 3 DNNs and 4 datasets: ResNet-32 evaluated on the CIFAR-10 dataset, a 2-layer LSTM evaluated on the Penn Treebank dataset, a BERT network evaluated on the MRPC dataset, and a BERT network evaluated on the MNLI dataset.…”
Section: DNN Inference Accuracy Investigation (mentioning; confidence: 99%)
“…Figure 2 displays the classification error on the MNIST handwritten digit recognition problem as a function of training epoch for various scenarios of training algorithm and AF values. We perform the experimental simulation using the open-source toolkit AIHWKIT (Rasch et al., 2021), and further details about the experimental environment can be found in Supplementary Materials section 3.3. First, performing SGD with asymmetric devices results in a severe accuracy drop for both the train and test datasets when compared with the software baseline.…”
(mentioning; confidence: 99%)
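The accuracy drop under SGD with asymmetric devices that this citation reports can be illustrated with a toy model. The sketch below (a hypothetical soft-bounds update rule, not the cited paper's device model or experiment) shows why asymmetry hurts: because the step size differs between potentiation and depression, equal numbers of up and down pulses do not cancel and the weight drifts even when the net requested update is zero.

```python
def asymmetric_update(w, dw, w_max=1.0, asymmetry=0.5):
    """Toy soft-bounds device update: the realized step shrinks
    differently for up (potentiation) and down (depression) pulses.
    The functional form and parameters are illustrative assumptions."""
    if dw >= 0:
        step = dw * (1.0 - w / w_max)              # potentiation slope
    else:
        step = dw * (1.0 + asymmetry * w / w_max)  # depression slope
    return w + step

w = 0.5
for _ in range(100):
    w = asymmetric_update(w, +0.01)  # one "up" pulse
    w = asymmetric_update(w, -0.01)  # matched "down" pulse
# w drifts away from 0.5 despite a zero net requested update
```

This zero-net-update drift is exactly the bias that plain SGD cannot compensate for, which is why the cited works explore modified training algorithms and reference-offset schemes.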