2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC) 2018
DOI: 10.1109/dac.2018.8465860
CMP-PIM: An Energy-Efficient Comparator-based Processing-In-Memory Neural Network Accelerator

Cited by 30 publications (32 citation statements); References 10 publications.
“…The possible reasons for the lack of concern about the security of network parameters may be twofold: 1) neural networks are widely regarded as robust against parameter variations; 2) DNNs used to be deployed only on high-performance computing systems (e.g., CPUs, GPUs, and other accelerators [13,14]), which normally contain a variety of mechanisms ensuring data integrity. Thus, attacking the parameters was seen as more of a system cyber-security topic.…”
Section: Introduction
confidence: 99%
“…When some local features are not well extracted, the local denoising effect is degraded. Recently, depthwise separable convolution (DSConv) has been used in many advanced neural networks, such as Xception [42], MobileNets [43], and MobileNetV2 [44], to replace the standard convolutional layer, aiming to reduce CNN computational cost and to extract local features [45].…”
Section: Methods
confidence: 99%
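The cost reduction this excerpt attributes to DSConv can be made concrete with a parameter count: a standard k×k convolution needs k·k·C_in·C_out weights, while DSConv splits it into a depthwise k×k stage plus a pointwise 1×1 stage. A minimal sketch (illustrative numbers only, unit stride, biases ignored; not from the cited papers):

```python
def standard_conv_params(k, c_in, c_out):
    # one k x k filter spanning all input channels, per output channel
    return k * k * c_in * c_out

def dsconv_params(k, c_in, c_out):
    # depthwise: one k x k filter per input channel
    # pointwise: 1x1 convolution that mixes channels
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 64, 128)  # 73728 weights
dsc = dsconv_params(3, 64, 128)         # 8768 weights
print(std, dsc, round(std / dsc, 1))    # roughly 8x fewer parameters
```

For a 3×3 layer with 64 input and 128 output channels, DSConv uses about an eighth of the weights, which is the kind of saving Xception and the MobileNets exploit.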
“…However, the majority of the inputs/outputs are moved across MAC arrays and from global buffers. Table 2 [49-61] reviews a selection of recently emerged embedded-NVM-based MAC implementations. Most of the published designs target inference applications and depend on ADC/DAC blocks for signal conversion.…”
Section: Architecture-Level Exploration
confidence: 99%
“…Numerous in-MRAM computing schemes have been demonstrated using on-chip arrays to realize fundamental Boolean logic operations (i.e., AND, OR, XOR, and full adder) and complex arithmetic functions (e.g., neural networks). Table 3 [26,31,34,58,59,68-76] surveys recent in-MRAM computing work. The main approaches are bit-cell modification, reference adaptation, PTL-enabled NMC, and in-memory analog computation.…”
Section: The State of the Art of In-MRAM Computing
confidence: 99%
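The excerpt's point that the listed Boolean primitives (AND, OR, XOR) compose into arithmetic can be illustrated with a one-bit full adder. This is a software sketch of the logic only, not the circuit-level in-MRAM scheme of any cited paper:

```python
def full_adder(a, b, cin):
    # sum bit is the XOR of all three inputs
    p = a ^ b                    # propagate signal
    s = p ^ cin
    # carry out: both inputs set, or the carry propagates through
    cout = (a & b) | (p & cin)
    return s, cout

# exhaustive check against ordinary integer addition
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
```

Chaining such adders bit-serially (or bit-parallel across array columns) is how in-memory Boolean operations scale up to the multi-bit arithmetic that MAC-based neural-network workloads require.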