2022 IEEE International Solid-State Circuits Conference (ISSCC)
DOI: 10.1109/isscc42614.2022.9731734

ReckOn: A 28nm Sub-mm² Task-Agnostic Spiking Recurrent Neural Network Processor Enabling On-Chip Learning over Second-Long Timescales

Abstract: The robustness of autonomous inference-only devices deployed in the real world is limited by data distribution changes induced by different users, environments, and task requirements. This challenge calls for the development of edge devices with an always-on adaptation to their target ecosystems. However, the memory requirements of conventional neural-network training algorithms scale with the temporal depth of the data being processed, which is not compatible with the constrained power and area budgets at the…

Cited by 64 publications (28 citation statements)
References 8 publications
“…In the presence of a teaching signal, a supervised learning framework can be used. For online and on-chip learning systems using events, this can be done through approximations of Backpropagation Through Time [42][43][44], which can be implemented on either digital 45 or in-memory memristive neuromorphic hardware 46. However, in the absence of supervision, MEMSORN-like hardware changes its structure and self-organizes to cluster the input signal.…”
Section: Comparison to Other Neuromorphic Self-Organizing Networks
confidence: 99%
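The approximations of Backpropagation Through Time referenced in this excerpt replace the backward pass, whose memory grows with sequence length, with eligibility traces updated forward in time at constant memory cost per weight. A minimal sketch of the idea, assuming a single linear leaky-integrator neuron (an illustrative simplification: real spiking recurrent networks add nonlinearities that make the trace-based gradient approximate rather than exact):

```python
import numpy as np

rng = np.random.default_rng(0)
T, alpha, w = 100, 0.9, 0.5          # sequence length, leak factor, input weight
x = rng.normal(size=T)               # input signal
y = rng.normal(size=T)               # target signal

# --- BPTT: store all T membrane states, then sweep backwards ---
v = np.zeros(T)                      # O(T) memory: the full state history
for t in range(T):
    v[t] = alpha * (v[t - 1] if t > 0 else 0.0) + w * x[t]

grad_bptt, delta = 0.0, 0.0
for t in reversed(range(T)):         # delta = dL/dv[t] for L = 0.5*sum((v-y)^2)
    delta = (v[t] - y[t]) + alpha * delta
    grad_bptt += delta * x[t]

# --- Online eligibility trace: O(1) memory, updated forward in time ---
grad_online, e, vt = 0.0, 0.0, 0.0
for t in range(T):
    vt = alpha * vt + w * x[t]       # membrane state, no history kept
    e = alpha * e + x[t]             # eligibility trace e[t] = dv[t]/dw
    grad_online += (vt - y[t]) * e   # learning signal times trace

# For this linear neuron the two gradients coincide exactly.
print(np.isclose(grad_bptt, grad_online))  # True
```

The forward-running trace is what makes always-on, on-chip adaptation feasible: the weight update at each timestep depends only on the current error and the current trace, not on a stored sequence of past activations.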
“…If implemented with bit-precise digital circuits or simulated in software, the effects of variability and inhomogeneity, and the advantages of the approaches used to cope with them, could be neither revealed nor exploited unless explicitly simulated. This holds for standard computers, custom ANN accelerators, and fully digital time-multiplexed neuromorphic computing systems which numerically integrate the dynamic equations to simulate the function of multiple neurons [4, 6, 127]. On the other hand, since these digital systems support fetching data from external memory banks, they can take advantage of the high density of DRAM and implement complex large-scale systems capable of remarkable achievements [128], even without adopting the brain-inspired strategies and methods presented here.…”
Section: Discussion
confidence: 99%
“…Static power dominates the power consumption in Loihi at ∼1 W. However, an application-specific integrated circuit (ASIC) for SHM could be envisaged with a subset of neurons and optimised pre-/post-processing functions, bringing resource utilisation closer to 100% and operating in the mW range or lower. Several digital and mixed-signal spiking neural network accelerators have been developed recently that consume as little as 100 µW [59, 60, 61, 62, 63]. The prospects these architectures offer are well suited to SHM, since true edge deployment of sophisticated damage-detection algorithms may become viable once a sufficiently low threshold of system power (including acquisition and transmission) is crossed.…”
Section: Discussion
confidence: 99%