2020 57th ACM/IEEE Design Automation Conference (DAC) 2020
DOI: 10.1109/dac18072.2020.9218524
A Two-way SRAM Array based Accelerator for Deep Neural Network On-chip Training

Cited by 27 publications (8 citation statements)
References 22 publications
“…In DNN+NeuroSim V2.0, we do not consider pipelining among the four key steps in training, i.e., #1 feed-forward, #2 computation of error, #3 computation of weight gradient, and #4 weight update, but users can potentially optimize the design as done in other works [14] [15]. However, the framework provides an option to build a pipelined system for feed-forward and computation of error: as we assume all weights are stored on-chip in CIM synaptic arrays, we can process multiple images simultaneously on-chip, i.e.…”
Section: CIM Architecture for Training
confidence: 99%
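The four key training steps named in the quote can be sketched for a single fully connected layer; this is a minimal NumPy illustration, not the paper's or NeuroSim's implementation, and all names, shapes, and the squared-error loss are illustrative assumptions.

```python
import numpy as np

# Illustrative single fully connected layer; shapes and seed are arbitrary.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))            # batch of 4 input vectors
w = rng.standard_normal((8, 3)) * 0.1      # weights (held "on-chip" in a CIM array)
y_true = np.eye(3)[rng.integers(0, 3, 4)]  # one-hot targets
lr = 0.1

# #1 feed-forward
y = x @ w
# #2 computation of error (gradient of a mean squared-error loss at the output)
err = y - y_true
loss_before = float((err ** 2).mean())
# #3 computation of weight gradient
grad = x.T @ err / x.shape[0]
# #4 weight update
w -= lr * grad
loss_after = float(((x @ w - y_true) ** 2).mean())
```

With weights resident on-chip, steps #1 and #2 for different images are independent, which is what makes the feed-forward/error-computation pipeline option in the quote possible.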
“…SRAM is a kind of volatile memory with the advantages of fast access speed, low static power consumption, and high endurance, and it is a common memory for CIM. According to the computation method, SRAM-CIM can currently be divided into AD-CIM [1][2][3][4], DD-CIM [5], and TD-CIM [6][7][8].…”
Section: Introduction
confidence: 99%
“…RECOM [24] was the first CIM-based accelerator to support DNN processing. Jiang et al. [25, 26] proposed SRAM CIM accelerators for CNN training. However, in several cases, it was assumed that the CIM macros were in the ideal state.…”
Section: Introduction
confidence: 99%