2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO)
DOI: 10.1109/micro50266.2020.00020
Look-Up Table based Energy Efficient Processing in Cache Support for Neural Network Acceleration

Cited by 29 publications (7 citation statements)
References 42 publications
“…Based on this observation, we can reduce the number of storage entries with a two-step optimization. In the first optimization step, we remove the even entries of the LUT input data pairs, as described in [38]. Multiplying an even value by another value (odd or even) covers two cases: ❶ the product is decomposed into a multiplication of two odd numbers, and the result is then shifted to obtain the final value.…”
Section: Efficient Search-based Computing Scheme
confidence: 99%
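The even-entry elimination quoted above rests on a simple identity: every even operand factors as an odd number times a power of two, so a LUT that stores only odd-by-odd products can recover any product with a final shift. A minimal sketch of that decomposition (illustrative only; the function and table names are hypothetical, not the paper's implementation):

```python
def odd_shift(x):
    """Factor a positive integer x as (odd, k) with x == odd << k."""
    k = 0
    while x % 2 == 0:
        x //= 2
        k += 1
    return x, k

# The LUT holds only odd x odd products, so it needs far fewer entries
# than a table over all 4-bit operand pairs.
odd_lut = {(a, b): a * b for a in range(1, 16, 2) for b in range(1, 16, 2)}

def lut_multiply(a, b):
    """Multiply via the reduced LUT, shifting to account for even factors."""
    oa, ka = odd_shift(a)
    ob, kb = odd_shift(b)
    return odd_lut[(oa, ob)] << (ka + kb)
```

For example, 6 × 10 becomes (3 × 5) << 2: one lookup in the odd-only table plus a two-bit shift.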
“…NDP proposals have explored many memory architectures. In the literature, SRAM-based NDP proposals mostly aim to add logic capabilities to the host's cache memories or memory controllers [17,19,20,21,22,23,24,58,59,103,106]. These works modify the cache hierarchy to avoid moving data from main memory and the cache memories to the host's cores.…”
Section: B Memory Architectures and NDP
confidence: 99%
“…Yin et al [21] start from the same base idea as Neural Cache, but improve scalability by using XNOR-Accumulate operations to activate multiple SRAM rows, double buffering to hide in-memory reprogramming latencies, and additional peripheral logic for multi-bit activations. Ramanathan et al [22] propose BFree, a bit-line-free LUT-based NDP design in SRAM subarrays that allows reconfigurable precision and NN layout. Long et al [23] further optimize NNs for these architectures, showcasing their potential for practical use with the LeNet, AlexNet, VGGNet, and ResNet Convolutional Neural Network (CNN) architectures.…”
Section: Introduction
confidence: 99%
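The XNOR-Accumulate operation mentioned in this citation underlies binary neural network inference: with weights and activations restricted to ±1 and encoded as bits (1 for +1, 0 for -1), a dot product reduces to a bitwise XNOR followed by a population count. A self-contained sketch of the arithmetic, not a model of the cited hardware:

```python
def xnor_accumulate(w_bits: int, a_bits: int, n: int) -> int:
    """Dot product of two n-element ±1 vectors packed into integers,
    with bit 1 encoding +1 and bit 0 encoding -1.

    XNOR marks positions where the signs match; each match contributes
    +1 and each mismatch -1, so the result is 2*popcount(XNOR) - n.
    """
    mask = (1 << n) - 1
    matches = bin((~(w_bits ^ a_bits)) & mask).count("1")
    return 2 * matches - n
```

For instance, with n = 4, weights 0b1011 and activations 0b1001 agree in three bit positions and disagree in one, giving a dot product of +2.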
“…The look-up table (LUT)-based CIM architecture is proposed to address the issues of mixed-based CIM solutions [47]. In LUT-based CIM, two memory rows are dedicated to the LUT, supporting two modes: a storage mode and a computation mode.…”
Section: CIM Architecture Beyond Mixed Solution
confidence: 99%
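The dual-mode organization described in this last citation can be illustrated with a toy software model: the same row of cells either serves ordinary reads and writes (storage mode) or is preloaded with precomputed results and indexed by the concatenated operands (computation mode). This is a hypothetical sketch for intuition, not the architecture proposed in [47]:

```python
class LUTRow:
    """Toy model of a memory row usable either as plain storage
    or as a 4-bit x 4-bit multiplication look-up table."""

    def __init__(self):
        self.cells = [0] * 256  # 256 entries: one per (a, b) operand pair

    # --- storage mode: ordinary read/write ---
    def write(self, addr: int, value: int) -> None:
        self.cells[addr] = value

    def read(self, addr: int) -> int:
        return self.cells[addr]

    # --- computation mode: preload products, then answer by lookup ---
    def program_multiply_lut(self) -> None:
        for a in range(16):
            for b in range(16):
                self.cells[(a << 4) | b] = a * b

    def compute(self, a: int, b: int) -> int:
        """A 'multiply' is just a read at the operand-derived address."""
        return self.cells[(a << 4) | b]
```

In this model the arithmetic cost is paid once, when the table is programmed; each subsequent multiplication is a single memory access, which is the essential trade-off behind LUT-based CIM.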