2019 Symposium on VLSI Circuits
DOI: 10.23919/vlsic.2019.8778074
Considerations Of Integrating Computing-In-Memory And Processing-In-Sensor Into Convolutional Neural Network Accelerators For Low-Power Edge Devices

Cited by 6 publications (2 citation statements); references 0 publications.
“…Quantization parameters are optimized by analyzing statistical inference information and comparing the impacts of different bit widths. The quantization process is denoted in (2), where 𝑎 represents the weight or input and 𝑎̂ represents the quantized value. The parameter 𝑆 denotes the fractional length; a larger 𝑆 indicates higher quantization resolution.…”
Section: A. Quantization Analyzer, (1) Quantizer
confidence: 99%
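The fixed-point scheme described above can be sketched in a few lines. Equation (2) of the cited paper is not reproduced here, so this is a minimal sketch under the common assumption that a fractional length 𝑆 means scaling by 2^S, rounding to an integer, and clipping to the signed range of the chosen bit width; the function name and parameters are illustrative, not from the paper.

```python
def quantize(a, bits=8, S=4):
    """Quantize a weight or input `a` to signed fixed-point.

    `bits` is the total bit width; `S` is the fractional length.
    A larger S gives finer resolution (step = 2**-S) at the cost
    of a smaller representable range -- matching the trade-off the
    quantization analyzer explores across bit widths.
    """
    q = round(a * 2 ** S)                       # scale and round to integer grid
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    q = max(qmin, min(qmax, q))                 # clip to the signed range
    return q / 2 ** S                           # map back to real value

print(quantize(0.30, bits=8, S=4))  # step 1/16 -> 0.3125
print(quantize(0.30, bits=8, S=6))  # step 1/64 -> 0.296875
```

Comparing the two calls shows why the analyzer favors a larger 𝑆 when the value range permits: the quantization error for 0.30 drops from 0.0125 at S=4 to about 0.003 at S=6.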
“…Existing digital processors face a data-transmission bottleneck, caused by the intrinsic von Neumann structure, in computation-intensive tasks [1,2]. Recently, computing-in-memory (CIM) systems have been proposed to overcome this challenge by integrating computing logic into the memory macro [3].…”
Section: Introduction
confidence: 99%