2022
DOI: 10.1109/tcsi.2022.3184175

SWPU: A 126.04 TFLOPS/W Edge-Device Sparse DNN Training Processor With Dynamic Sub-Structured Weight Pruning


Cited by 6 publications (2 citation statements)
References 23 publications
“…Output sparsity exploitation during the WG stage has big benefits thanks to both useless computation avoidance and memory access removal. For this reason, recent energy-efficient training processors [26,31,51,78] supported triple sparsity exploitation by combining iterative pruning.…”
Section: Pruning-aware Output Zero Skipping During the WG
confidence: 99%
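The skipping idea in the statement above can be illustrated with a small sketch. The function name, tensor shapes, and the dense numpy formulation below are assumptions made for illustration only; this is not the SWPU datapath, just a minimal view of why all-zero output-gradient columns and pruned weight positions can be skipped during the weight-gradient (WG) computation, saving both the multiply-accumulates and the corresponding memory accesses.

import numpy as np

def wg_zero_skip(x, dy, prune_mask):
    # Weight-gradient (WG) stage for one dense layer: dW = x^T @ dy.
    #   x:          (batch, in_features)   input activations
    #   dy:         (batch, out_features)  output error gradients
    #   prune_mask: (in_features, out_features), 1 = kept weight, 0 = pruned
    dw = np.zeros((x.shape[1], dy.shape[1]), dtype=x.dtype)
    # Output zero skipping: an all-zero column of dy contributes nothing
    # to dW, so both its compute and its memory traffic are skipped.
    nz_cols = np.flatnonzero(np.any(dy != 0, axis=0))
    for j in nz_cols:
        # Pruning awareness: gradients at pruned positions would be
        # discarded anyway, so only the kept positions are accumulated.
        kept = np.flatnonzero(prune_mask[:, j])
        if kept.size:
            dw[kept, j] = x[:, kept].T @ dy[:, j]
    return dw

Against the dense reference dW = x.T @ dy, the result matches wherever the prune mask keeps a weight; all other entries are simply left at zero.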
“…[17][18][19] was proposed to unify the data representation method of both input operand and accumulation. Flexpoint [55] tried to substitute FP with FXP representation using a shared exponent management algorithm together for simplification of MAC design, but it failed to reduce the required bit-precision to less than 16-bit.…”
Section: A New Number Representation
confidence: 99%
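The shared-exponent idea attributed to Flexpoint in this quote can likewise be sketched. The helper names and the use of numpy are assumptions for illustration; the point is only that a single exponent is managed for a whole tensor while each element keeps a fixed-point mantissa, which is why the quoted observation is about the mantissa width not dropping below 16 bits.

import numpy as np

def to_shared_exponent(t, mant_bits=16):
    # Represent a whole tensor as integer mantissas plus ONE shared exponent
    # (block floating point / Flexpoint-style). mant_bits=16 mirrors the
    # quote's point that this scheme did not get below 16-bit precision.
    max_abs = float(np.max(np.abs(t)))
    if max_abs == 0.0:
        return np.zeros(t.shape, dtype=np.int32), 0
    # Choose the exponent so the largest magnitude fits in the mantissa
    # range (one bit reserved for the sign).
    exp = int(np.floor(np.log2(max_abs))) + 1 - (mant_bits - 1)
    mant = np.round(t / 2.0 ** exp).astype(np.int32)
    return mant, exp

def from_shared_exponent(mant, exp):
    # Dequantize: every element is scaled by the same shared exponent.
    return mant.astype(np.float32) * 2.0 ** exp

In such a scheme each tensor (weights, activations, gradients) carries its own shared exponent, so the MAC hardware only ever operates on fixed-point mantissas.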