2021 Design, Automation & Test in Europe Conference & Exhibition (DATE)
DOI: 10.23919/date51398.2021.9474235
TinyADC: Peripheral Circuit-aware Weight Pruning Framework for Mixed-signal DNN Accelerators

Cited by 20 publications (5 citation statements); References 13 publications.
“…Moreover, the limited computing resources and capabilities of these devices make it difficult to meet real-time requirements while accomplishing complex tasks. While neural network compression techniques (e.g., pruning [59,60] and quantization [61,62]), together with operating-system-level optimizations [63,64], are potential solutions for cost-effective continual learning on edge devices [65], it is important to note that research to date has mainly focused on computer vision tasks.…”
Section: Energy Efficiency and Computation Capability
confidence: 99%
“…However, it induces remarkable hardware overhead, as all mechanisms such as row indexing, routing control, and word-line control are hardware-managed. TinyADC [40] proposes a pruning solution that fixes the number of non-zero weights in each column of the ReRAM crossbar while allowing their positions to vary. This decreases the accumulated value and the required ADC resolution.…”
Section: A. ReRAM Crossbar and ReRAM-based DNN Acceleration
confidence: 99%
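The column-balanced pruning described in the excerpt can be sketched as follows. This is a hypothetical illustration, not TinyADC's actual algorithm: it simply keeps the `k` largest-magnitude weights in each column and zeroes the rest, so every crossbar column ends up with exactly `k` non-zero weights while their row positions remain free to vary.

```python
import numpy as np

def prune_fixed_nonzeros_per_column(weights, k):
    """Keep the k largest-magnitude weights in each column; zero the rest.

    Illustrative sketch (not the paper's exact method): with exactly k
    non-zeros per column, the worst-case accumulated bit-line value --
    and hence the ADC resolution the column needs -- is bounded and
    uniform across columns.
    """
    pruned = np.zeros_like(weights)
    for col in range(weights.shape[1]):
        column = weights[:, col]
        # Indices of the k entries with the largest magnitude in this column.
        keep = np.argsort(np.abs(column))[-k:]
        pruned[keep, col] = column[keep]
    return pruned
```

Because every column carries the same non-zero count, a single shared ADC sized for that count suffices for the whole crossbar, which is what makes the pruning "peripheral circuit-aware".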
“…However, the ADC does not scale as fast as CMOS technology does [18,42]. Recent studies [40,42] report that the ADC/DAC blocks may become the major contributor to total chip area and power. To save power and area, many designs share one ADC among many crossbar columns (e.g., 128 columns in ISAAC).…”
Section: B. Challenges
confidence: 99%
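A back-of-envelope model (an assumption for illustration, not a figure from the paper) shows why bounding the non-zeros per column reduces ADC cost: the ADC must distinguish every possible accumulated bit-line value, so its resolution grows with the number of contributing cells.

```python
import math

def adc_bits(nonzeros_per_column, cell_bits=1, input_bits=1):
    """Minimum ADC resolution to resolve the worst-case column sum.

    Simplified model: each of the non-zero cells contributes at most
    (2**cell_bits - 1) * (2**input_bits - 1), so the bit line can take
    max_sum + 1 distinct values, needing ceil(log2(max_sum + 1)) bits.
    """
    max_sum = nonzeros_per_column * (2**cell_bits - 1) * (2**input_bits - 1)
    return math.ceil(math.log2(max_sum + 1))
```

Under this model, with 1-bit cells and inputs, 128 active rows per column require an 8-bit ADC, while pruning down to 16 non-zeros per column needs only 5 bits.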
“…Deep Neural Networks have played a significant role in advancing many applications (Yuan et al, 2021; Ding et al, 2017). The field of Natural Language Processing (NLP) leverages Recurrent Neural Networks (RNNs) and Transformers to achieve outstanding performance on many tasks.…”
Section: Introduction
confidence: 99%