2023 IEEE International Solid-State Circuits Conference (ISSCC)
DOI: 10.1109/isscc42615.2023.10067650
22.6 ANP-I: A 28nm 1.5pJ/SOP Asynchronous Spiking Neural Network Processor Enabling Sub-0.1μJ/Sample On-Chip Learning for Edge-AI Applications

Cited by 16 publications (2 citation statements)
References 10 publications
“…Table 6 summarizes the performance and specifications of state-of-the-art neuromorphic chips. Mixed-signal designs with analog neurons and synapse computation and high-speed digital peripherals are grouped on the left [4, 12, 34], and digital designs, including Darwin3, are grouped on the right [5–8, 10, 11, 30, 35–37]. The critical metrics for efficient spiking neuromorphic hardware platforms are the scale of neurons and synapses, model construction capabilities, synaptic plasticity and the energy per synaptic operation.…”
Section: Results
mentioning (confidence: 99%)
“…In order to realize a low-power neuromorphic processor enabling on-chip learning with low learning energy overhead for edge-AI applications, we propose in this article a 28-nm 1.25-mm² asynchronous neuromorphic processor (ANP-I) [20] with 8-b/10-b weight precision that enables on-chip learning for edge-AI tasks. ANP-I uses a hierarchical update skip (HUS) mechanism to reduce learning energy and a randomly selected target window (TW) to reduce the number of spikes used in learning.…”
mentioning (confidence: 99%)
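
The second statement names ANP-I's two learning-energy optimizations: a hierarchical update skip (HUS) that avoids unnecessary weight writes, and a randomly selected target window (TW) that limits how many spikes participate in learning. The NumPy sketch below is a rough conceptual illustration of those two ideas, not the chip's actual circuitry or algorithm; the function names, thresholds, array shapes, and the per-neuron error formulation are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def random_target_window(spike_train, window_len):
    """Keep only spikes inside a randomly placed target window (TW).
    `spike_train` is a boolean array over time steps; `window_len` is an
    assumed hyperparameter, not a value taken from the paper."""
    total_steps = spike_train.shape[0]
    start = rng.integers(0, total_steps - window_len + 1)
    mask = np.zeros(total_steps, dtype=bool)
    mask[start:start + window_len] = True
    return spike_train & mask

def hierarchical_update_skip(weights, neuron_errors, pre_trace, lr,
                             layer_thresh=1e-3, neuron_thresh=1e-4):
    """Two-level skip: if the layer's aggregate error is negligible, skip
    every weight write in the layer; otherwise update only the neurons
    whose individual error clears a threshold. Both thresholds here are
    illustrative placeholders."""
    if np.abs(neuron_errors).sum() < layer_thresh:
        return weights, 0                      # level 1: whole layer skipped
    active = np.abs(neuron_errors) >= neuron_thresh
    # level 2: per-neuron skip; update = outer product of each active
    # neuron's error with the presynaptic activity trace
    weights[active] -= lr * np.outer(neuron_errors[active], pre_trace)
    return weights, int(active.sum())

# Toy usage: 100 time steps, 8 output neurons, 16 inputs
spikes = rng.random(100) < 0.2
tw_spikes = random_target_window(spikes, window_len=10)
w = rng.standard_normal((8, 16))
err = rng.standard_normal(8) * 1e-3
trace = rng.random(16)
w, n_updated = hierarchical_update_skip(w, err, trace, lr=0.01)
```

In this sketch, both mechanisms cut the work done per learning step: the TW shrinks the set of spikes that generate error terms, and the HUS gates memory writes first at layer granularity and then per neuron, which is one plausible way a skip hierarchy could reduce update energy.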