2020 IEEE International Solid-State Circuits Conference (ISSCC)
DOI: 10.1109/isscc19947.2020.9062985

15.3 A 351TOPS/W and 372.4GOPS Compute-in-Memory SRAM Macro in 7nm FinFET CMOS for Machine-Learning Applications

Cited by 181 publications (54 citation statements). References 2 publications.
“…To adequately reflect the additional computational complexity tackled by multibit accelerators, the respective quantization of weight (n_w) and input (n_x) can be factored in, similar to the approach taken in [19], yielding precision-scaled TOP/s and TOP/s/W. This is shown in Table III, where recent implementations of analog in-memory MAC-operation accelerators using SRAM combined with capacitors [17], [18], [30], [31] are compared with the presented work.…”
Section: System Implementation Study and Analysis
confidence: 99%
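The precision scaling mentioned in that statement can be made concrete. Below is a minimal Python sketch, assuming the common convention that an n_w-bit × n_x-bit MAC counts as n_w · n_x binary operations (the exact normalization used in [19] may differ); the numbers plugged in are the headline figures from the paper title, and the 4-bit operand widths are illustrative assumptions.

```python
def precision_scaled(raw_tops, raw_tops_per_w, n_w, n_x, base=1):
    """Scale raw throughput and efficiency by operand bit widths.

    Assumes an n_w-bit x n_x-bit MAC counts as
    (n_w / base) * (n_x / base) base-bit operations; the exact
    scaling convention in [19] may differ.
    """
    factor = (n_w / base) * (n_x / base)
    return raw_tops * factor, raw_tops_per_w * factor

# Illustrative only: headline figures from the paper title
# (372.4 GOPS, 351 TOPS/W), assumed 4-bit weights and inputs,
# normalized to 1-bit operations.
tops, tops_per_w = precision_scaled(0.3724, 351.0, n_w=4, n_x=4)
print(f"precision-scaled: {tops:.2f} TOPS, {tops_per_w:.0f} TOPS/W")
```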
“…For example, a scheme that could scale in terms of weight and input bits is demonstrated in [30]. In this article, a specific implementation for 4-bit inputs is presented.…”
Section: System Implementation Study and Analysis
confidence: 99%
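One generic way such input-bit scaling is realized is bit-serial operation: the array performs one binary-input pass per input bit plane, and the partial sums are combined by shift-and-add. The sketch below is a behavioral illustration of that idea under those assumptions, not the specific scheme of [30] or of the citing article:

```python
import numpy as np

def bit_serial_mac(x, w, n_x=4):
    """Multi-bit MAC built from binary-input passes.

    x: unsigned n_x-bit input vector; w: weight vector.
    Each pass uses one input bit plane (a binary-input array
    operation), and partial sums are combined by shift-and-add,
    mimicking how a binary-input CIM array can be reused for
    multi-bit inputs.
    """
    acc = 0
    for b in range(n_x):
        bit_plane = (x >> b) & 1             # binary inputs for this pass
        partial = int(np.dot(bit_plane, w))  # one binary-input array pass
        acc += partial << b                  # weight the pass by bit significance
    return acc

x = np.array([3, 7, 1, 12])   # 4-bit inputs
w = np.array([2, -1, 5, 1])   # example weights
assert bit_serial_mac(x, w) == int(np.dot(x, w))
```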
“…The energy efficiency of 300 TOPS/W in this work is six times as high as that of a state-of-the-art in-memory-computing AI processor implementing the same BinaryConnect model, while the VLSI technology node used in this work is four generations older than that in the AI processor [25], as shown in Table 2. One of the latest AI processors fabricated using a 7 nm-node VLSI technology [18] is also included in Table 2. It is noted that, even compared with this processor, the energy efficiency of our present processor is comparable.…”
Section: A. Evaluation of Energy Efficiency and Calculation Precision
confidence: 99%
“…As an implementation approach other than digital processors, the use of analog operation in CMOS VLSI circuits is a promising method for achieving extremely low power consumption for such calculation tasks [11]–[14]. In particular, in-memory computing (IMC) approaches, which achieve weighted-sum calculation utilizing memory circuits such as static random-access memory (SRAM), have been popular since around 2016 [15]–[18].…”
confidence: 99%
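As a rough behavioral picture of the weighted-sum operation such SRAM-plus-capacitor IMC macros perform, the toy model below computes a binary weighted sum in the charge domain, assuming ideal, equal unit capacitors and a linear readout; it illustrates the principle only and does not model any cited circuit:

```python
import numpy as np

def charge_domain_weighted_sum(inputs, weights, c_unit=1.0, v_dd=1.0):
    """Behavioral model of a capacitor-based in-memory weighted sum.

    Each bit cell conditionally charges its local capacitor
    (the product of a binary input and a binary stored weight);
    shorting all capacitors together averages the charge, so the
    shared-node voltage is proportional to the popcount of the
    bitwise products. Assumes ideal, equal capacitors.
    """
    products = inputs & weights                 # per-cell binary multiply
    total_charge = c_unit * v_dd * products.sum()
    total_cap = c_unit * len(products)
    return total_charge / total_cap             # voltage seen by the ADC

inputs  = np.array([1, 0, 1, 1, 0, 1, 1, 0])
weights = np.array([1, 1, 0, 1, 0, 1, 0, 0])
v = charge_domain_weighted_sum(inputs, weights)
print(f"shared-node voltage: {v:.3f} V (popcount = {int((inputs & weights).sum())})")
```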