Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided Design 2022
DOI: 10.1145/3508352.3549360
Computing-In-Memory Neural Network Accelerators for Safety-Critical Systems

Cited by 12 publications (3 citation statements)
References 25 publications
“…By introducing mutation into the children generation, it is possible to obtain a better assignment strategy (Lines 13-18). After evaluating all the design points, it records the throughput-optimal point under the latency constraint and updates the population by selecting the top solutions (Lines 19-24).…”
Section: SSR Design Space Exploration
confidence: 99%
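The statement above outlines a genetic-algorithm-style design space exploration: mutate the children generation, evaluate design points against a latency constraint, record the throughput-optimal point, and keep the top solutions. The sketch below is a minimal illustration of that loop, not the cited work's implementation; `mutate`, `evaluate_latency`, and `evaluate_throughput` are hypothetical stand-ins for the cited work's evaluators.

```python
import random

def mutate(assignment, rate=0.1):
    """Randomly perturb genes of an assignment with probability `rate`."""
    child = list(assignment)
    for i in range(len(child)):
        if random.random() < rate:
            child[i] = random.randint(0, 7)  # assumed discrete choice space
    return child

def explore(population, generations, latency_bound,
            evaluate_latency, evaluate_throughput, top_k=16):
    best = None
    for _ in range(generations):
        # Mutate the children generation to reach better assignments
        # (cf. Lines 13-18 in the quoted statement).
        children = [mutate(p) for p in population]
        candidates = population + children
        # Keep feasible points and record the throughput-optimal one
        # under the latency constraint (cf. Lines 19-24).
        feasible = [c for c in candidates
                    if evaluate_latency(c) <= latency_bound]
        feasible.sort(key=evaluate_throughput, reverse=True)
        if feasible and (best is None or
                         evaluate_throughput(feasible[0]) > evaluate_throughput(best)):
            best = feasible[0]
        # Update the population by selecting the top solutions.
        population = feasible[:top_k] or population
    return best
```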
“…If a design requires higher throughput, which can be achieved by batching more data, the system has to sacrifice latency. While users can only explore the latency-throughput tradeoff by changing the batch size when using an off-the-shelf deep learning framework on GPUs, FPGA accelerators [12,13,14,15,16] and other tiled accelerators [17,18,19,20,21,22,23,24] provide more flexibility and a larger design space in which to explore this tradeoff.…”
Section: Introduction
confidence: 99%
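As a back-of-the-envelope illustration of the batching tradeoff this statement describes: all numbers below are made up, and the cost model (a fixed per-batch overhead plus a per-sample compute time) is an assumption for illustration, not any cited accelerator's actual model.

```python
# Illustrative latency/throughput tradeoff as batch size grows.
FIXED_OVERHEAD_MS = 2.0   # assumed per-batch launch/setup cost
PER_SAMPLE_MS = 0.5       # assumed per-sample compute cost

for batch in (1, 4, 16, 64):
    latency_ms = FIXED_OVERHEAD_MS + PER_SAMPLE_MS * batch
    throughput = batch / (latency_ms / 1000.0)  # samples per second
    print(f"batch={batch:3d}  latency={latency_ms:6.1f} ms  "
          f"throughput={throughput:8.1f} samples/s")
```

Larger batches amortize the fixed overhead, so throughput rises while per-batch latency grows, which is exactly the tradeoff the quoted statement points out.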
“…Among all the non-conventional computing systems, in-memory computing appears to be a promising answer to the von Neumann bottleneck. In-memory computing is a technique that runs arithmetic and logic operations entirely in computer memory, as shown in Figure 1.2a [9-11].…”
Section: Non-conventional Computing and Emerging Memory Devices
confidence: 99%
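Since the cited paper concerns computing-in-memory neural network accelerators, a minimal idealized model of the core operation, an analog crossbar matrix-vector multiply, may make the idea concrete. Everything below (differential weight encoding, ideal devices, no ADC quantization) is a simplifying assumption for illustration, not the paper's design.

```python
import numpy as np

# Idealized in-memory crossbar matrix-vector multiply. Weights are stored
# as cell conductances; applying input voltages to the rows produces
# column currents proportional to the weighted sums (Kirchhoff's current
# law), so the multiply-accumulate happens inside the memory array itself.

rng = np.random.default_rng(0)
weights = rng.uniform(-1.0, 1.0, size=(4, 3))  # logical weights (4 in, 3 out)
inputs = rng.uniform(0.0, 1.0, size=4)         # activations as row voltages

# Differential encoding: signed weights map onto two non-negative
# conductance arrays, since physical conductances cannot be negative.
g_pos = np.clip(weights, 0.0, None)
g_neg = np.clip(-weights, 0.0, None)

# Column currents from each array, subtracted to recover the signed result.
out = inputs @ g_pos - inputs @ g_neg

assert np.allclose(out, inputs @ weights)
print(out)
```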