2022
DOI: 10.1088/2634-4386/ac6533

P-CRITICAL: a reservoir autoregulation plasticity rule for neuromorphic hardware

Abstract: Backpropagation algorithms on recurrent artificial neural networks require an unfolding of accumulated states over time. These states must be kept in memory for an undefined period of time which is task-dependent and costly for edge devices. This paper uses the reservoir computing paradigm where an untrained recurrent pool of neurons is used as a preprocessor for temporally structured inputs and with a limited number of training data samples. These so-called reservoirs usually require either extensive fine-tun…

Cited by 3 publications
(3 citation statements)
References 46 publications
“…However, in the case of discontinuous activation functions, such as the one we have with the RBN, it has been shown that the Echo State Property (ESP) cannot be achieved. The spectral radius alone fails to characterize the dynamics and performance of these reservoirs (Ozturk et al, 2007; Alexandre et al, 2009; Tieck et al, 2018; Balafrej et al, 2022). In Supplementary material S8.1, we explicitly discuss the link between ρ and the mean and variance of the weight matrix and show ρ is of no particular interest in the study of the dynamics.…”
Section: Model
confidence: 99%
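The link between ρ and the weight statistics mentioned in this statement can be illustrated with a small sketch. For a reservoir matrix with zero-mean i.i.d. entries, random-matrix theory (the circular law) predicts that the spectral radius is set almost entirely by the weight variance, ρ ≈ σ√N. The matrix size and weight scale below are illustrative choices, not parameters from the cited works:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500        # reservoir size (illustrative)
sigma = 0.05   # standard deviation of the weights (illustrative)

# Random reservoir weight matrix with zero-mean i.i.d. Gaussian entries.
W = rng.normal(loc=0.0, scale=sigma, size=(N, N))

# Spectral radius: the largest eigenvalue magnitude of W.
rho = np.max(np.abs(np.linalg.eigvals(W)))

# Circular-law prediction: rho ~ sigma * sqrt(N), i.e. rho is determined
# by the variance of the weights alone for zero-mean i.i.d. matrices.
print(rho, sigma * np.sqrt(N))
```

This is why, for continuous-valued reservoirs, rescaling the weight variance is equivalent to tuning ρ; the quoted statement's point is that for discontinuous activations this single scalar no longer predicts the reservoir dynamics.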
“…The resulting E/I balance is related to an optimized dynamical network chaos (Van Vreeswijk and Sompolinsky, 1996), called the edge of chaos. In previous research (Ivanov and Michmizos, 2021; Balafrej et al, 2022), edge-of-chaos dynamics were shown to improve the neural coding in an LSM in comparison with randomly initialized liquids. The balance of excitatory and inhibitory currents can thus be seen as an often neglected heuristic for optimized coding in LSMs.…”
Section: Introduction
confidence: 91%
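The E/I-balance heuristic named in this statement can be sketched in a toy form: split the reservoir into excitatory and inhibitory populations, then scale the inhibitory weights so the mean recurrent drive per neuron cancels. The 80/20 population split and uniform weight distributions below are common illustrative assumptions, not values taken from the cited works:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400            # reservoir size (illustrative)
frac_exc = 0.8     # 80/20 excitatory/inhibitory split (illustrative)
n_exc = int(N * frac_exc)

# Excitatory columns carry positive weights, inhibitory columns negative ones.
w_exc = rng.uniform(0.0, 1.0, size=(N, n_exc))
w_inh = -rng.uniform(0.0, 1.0, size=(N, N - n_exc))

# Scale inhibition so total excitation and inhibition cancel (E/I balance).
g = w_exc.sum() / -w_inh.sum()
w_inh *= g
W = np.hstack([w_exc, w_inh])

print(abs(W.sum()))  # near zero: mean recurrent drive is balanced
```

Because the inhibitory population is smaller, each inhibitory weight ends up stronger than each excitatory one, which is the usual signature of balanced networks poised near the edge of chaos.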
“…Recently, several approaches to improving the performance of LSMs have been proposed. Task-agnostic, data-driven training of the recurrent liquid weights (Jin and Li, 2016; Ivanov and Michmizos, 2021), continuous neuronal adaptation based on intrinsic neuronal plasticity (Zhang and Li, 2019a), reservoir autoregulation (Balafrej et al, 2022), liquid ensembles (Wijesinghe et al, 2019), and evolutionary optimization (Zhou et al, 2020) have all been shown to improve the basic LSM design, keeping its sparse properties. However, these enhancements come at the cost of increased (training) complexity and data-dependent tuning of the LSM parameters, eliminating some of its inherent advantages.…”
Section: Introduction
confidence: 99%