2021 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn52387.2021.9533726
Spiking Neural Networks with Laterally-Inhibited Self-Recurrent Units

Cited by 5 publications (7 citation statements) | References 29 publications
“…In the recurrent network, negative weights mainly provide the function of inhibitory stimulation. Here we follow the settings in previous research (Zhang and Li, 2021a, 2021b) and adopt fixed negative weights. In experiments, fixed negative weights can reduce the optimization complexity without significant performance loss, while providing stable inhibitory connections.…”
Section: Methods
confidence: 99%
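The point quoted above can be made concrete with a minimal sketch (assumed PyTorch style, not the cited authors' code): the recurrent weights are fixed negative constants, so they act purely as inhibition and are never touched by the optimizer. The layer size (128) and inhibition strength (-0.3) are illustrative assumptions.

```python
import torch
import torch.nn as nn

n = 128                                        # neurons in the recurrent layer (illustrative)
layer = nn.Module()
# Off-diagonal entries fixed at -0.3; registered as a buffer rather than an
# nn.Parameter, so the recurrent weights are excluded from gradient-based optimization.
layer.register_buffer("w_rec", -0.3 * (torch.ones(n, n) - torch.eye(n)))

spikes = torch.zeros(4, n)                     # spikes from the previous time step (batch of 4)
inhibitory_current = spikes @ layer.w_rec.t()  # recurrent input is non-positive by construction
```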
“…Connections. SNN models with recurrent connections have been demonstrated to achieve good performance and a larger application scope than SNNs with only feed-forward connections, such as RSNN [28], ST-RSBP [12], RDS-BP [13], LISNN [14], LISR [15], ScSr-SNN [16], SC-ML [17], and SCRNN [29]. For example, in the LISNN model [14], lateral interaction is achieved through a trainable interaction kernel function, which is used to calculate the interaction weights between neighboring neurons.…”
Section: SNN With Recurrent Connections
confidence: 99%
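A rough sketch of the lateral-interaction idea attributed to LISNN [14] in the quote above: interaction weights between neighboring neurons come from a small trainable kernel that is convolved with the previous time step's spikes. The 1-D neuron layout, kernel size, and LIF constants here are illustrative assumptions, not the cited paper's exact configuration.

```python
import torch
import torch.nn as nn


class LateralInteractionLIF(nn.Module):
    def __init__(self, n_in, n_out, kernel_size=5, decay=0.9, threshold=1.0):
        super().__init__()
        self.ff = nn.Linear(n_in, n_out, bias=False)
        # Trainable lateral kernel shared across positions (neighborhood interaction).
        self.lateral = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.decay, self.threshold = decay, threshold

    def forward(self, x_seq):
        # x_seq: (time, batch, n_in) binary spike trains
        T, B, _ = x_seq.shape
        v = torch.zeros(B, self.ff.out_features, device=x_seq.device)
        s = torch.zeros_like(v)
        out = []
        for t in range(T):
            lateral_in = self.lateral(s.unsqueeze(1)).squeeze(1)  # interaction with neighbors
            v = self.decay * v + self.ff(x_seq[t]) + lateral_in   # leaky membrane update
            s = (v >= self.threshold).float()                     # spike generation
            v = v * (1.0 - s)                                     # hard reset after a spike
            out.append(s)
        return torch.stack(out)
```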
“…Nowadays, more research focuses on how to make spiking neural networks trained with the BP algorithm have recurrent connections like biological neurons. Some methods have been proposed to achieve spiking neural network models with recurrent connections, such as improved spiking neural networks with lateral interactions (LISNN) [14], the laterally inhibited self-recurrent unit (LISR) [15], the skip-connected self-recurrent SNN (ScSr-SNN) [16], and the sparsely connected recurrent motif layer (SC-ML) [17]. These methods achieve recurrent connections within a layer or within a single neuron through fixed-weight intralayer connections, single-neuron self-recurrent connections, and sparsely connected intralayer neural connections.…”
Section: Introduction
confidence: 99%
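To illustrate the laterally-inhibited self-recurrent idea summarized in the quote above, here is a hedged sketch of one possible layer: each neuron excites itself through a self-recurrent connection while the other neurons in the layer inhibit it through fixed-weight intralayer connections. The specific values w_self and w_inh, and the LIF dynamics, are assumptions for illustration rather than the settings used in the cited papers.

```python
import torch
import torch.nn as nn


class LISRLayer(nn.Module):
    def __init__(self, n_in, n_out, w_self=1.0, w_inh=-0.3, decay=0.9, threshold=1.0):
        super().__init__()
        self.ff = nn.Linear(n_in, n_out, bias=False)           # trainable feed-forward weights
        # Diagonal = excitatory self-recurrence, off-diagonal = fixed lateral inhibition;
        # stored as a buffer so these recurrent weights are not optimized.
        w_rec = w_self * torch.eye(n_out) + w_inh * (torch.ones(n_out, n_out) - torch.eye(n_out))
        self.register_buffer("w_rec", w_rec)
        self.decay, self.threshold = decay, threshold

    def forward(self, x_seq):
        # x_seq: (time, batch, n_in) binary spike trains
        T, B, _ = x_seq.shape
        v = torch.zeros(B, self.ff.out_features, device=x_seq.device)
        s = torch.zeros_like(v)
        out = []
        for t in range(T):
            v = self.decay * v + self.ff(x_seq[t]) + s @ self.w_rec.t()
            s = (v >= self.threshold).float()
            v = v * (1.0 - s)
            out.append(s)
        return torch.stack(out)
```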
“…In order to expand spiking neurons' temporal receptive fields, researchers have developed various approaches for specific tasks by drawing on existing ANN structures. In Zhang and Li (2021), the authors added recurrent connections to the SNN, but the weights of these recurrent connections were set manually. In Zhang and Li (2019), the authors also changed the network into a recurrent structure and proposed an effective training method.…”
Section: Introduction
confidence: 99%