2019
DOI: 10.1016/j.neunet.2019.08.016

Locally connected spiking neural networks for unsupervised feature learning

Abstract: In recent years, Spiking Neural Networks (SNNs) have demonstrated great successes in completing various Machine Learning tasks. We introduce a method for learning image features by locally connected layers in SNNs using the spike-timing-dependent plasticity (STDP) rule. In our approach, sub-networks compete via competitive inhibitory interactions to learn features from different locations of the input space. These Locally-Connected SNNs (LC-SNNs) manifest key topological features of the spatial interaction of biolo…
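As a rough illustration of the mechanism the abstract describes, the sketch below pairs a bank of per-location weight patches (one sub-network per patch of the input) with a simplified pair-based STDP update. All names, patch sizes, and learning rates here are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed geometry: 28x28 input, 8x8 patches, stride 4 (hypothetical values).
patch, stride, in_side = 8, 4, 28
n_locs = ((in_side - patch) // stride + 1) ** 2   # independent spatial locations
n_filters = 16                                    # filters learned per location

# One independent weight bank per location: (locations, filters, patch pixels).
W = rng.random((n_locs, n_filters, patch * patch))

def stdp_update(W, pre, post, a_plus=0.01, a_minus=0.012):
    """Simplified pair-based STDP: potentiate weights where pre- and
    post-synaptic spikes coincide, depress weights of spiking post-neurons
    whose inputs stayed silent."""
    for loc in range(W.shape[0]):
        W[loc] += a_plus * np.outer(post[loc], pre[loc])         # LTP
        W[loc] -= a_minus * np.outer(post[loc], 1.0 - pre[loc])  # LTD
    np.clip(W, 0.0, 1.0, out=W)  # keep weights bounded and excitatory
    return W

# Toy usage with random binary spike vectors for one simulation step.
pre = rng.integers(0, 2, (n_locs, patch * patch))
post = rng.integers(0, 2, (n_locs, n_filters))
W = stdp_update(W, pre, post)
```

The per-location weight banks are what distinguish a locally connected layer from a convolution: filters are learned independently at each location rather than shared across the input.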

Cited by 55 publications (58 citation statements)
References 35 publications (58 reference statements)
“…Very recently, (Pogodin et al 2021) proposed bio-inspired dynamic weight sharing and adding lateral connections to locally-connected layers to achieve the same regularization goals of weight sharing and normal convolutional filters. The first work to integrate a locally-connected (LC) layer into an SNN (Saunders et al 2019) used a network with no hidden layers where the rate-coded input is passed to the output layer via local connections. They exploited recurrent inhibitory connections similar to the ones employed by (Diehl and Cook 2015) to simulate a winner-take-all (WTA) inhibition mechanism in their output.…”
Section: Related Work
confidence: 99%
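The rate-coded input mentioned in the statement above is commonly realized as Poisson spike trains whose firing rates scale with pixel intensity. A minimal sketch, with the duration and peak rate chosen as assumptions rather than taken from the paper:

```python
import numpy as np

def poisson_encode(image, duration=250, dt=1.0, max_rate=63.75, rng=None):
    """Convert an image with intensities in [0, 1] into a binary spike raster
    of shape (timesteps, pixels); each pixel fires as an independent Poisson
    process with rate proportional to its intensity."""
    if rng is None:
        rng = np.random.default_rng()
    rates = image.ravel() * max_rate       # firing rate in Hz per pixel
    p_spike = rates * (dt / 1000.0)        # spike probability per timestep
    steps = int(duration / dt)
    return (rng.random((steps, p_spike.size)) < p_spike).astype(np.uint8)

spikes = poisson_encode(np.random.rand(28, 28))  # e.g. one MNIST-sized frame
```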
“…One is allowing the weights to have negative values, which corresponds to having inhibitory neurons, as done in the convolutional layers used by (Lee et al 2018). The other is to use a combination of recurrent inhibitory connections and adaptive thresholds as done by (Diehl and Cook 2015; Saunders et al 2018, 2019). In this work, we used the latter approach for our feature extraction LC layer.…”
Section: Feature Extraction Layer (Local Connections)
confidence: 99%
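A minimal sketch of the combination the statement above describes, assuming a single layer of simple integrate-and-fire units: the neuron furthest past its threshold wins, recurrent inhibition resets the rest of the layer, and the winner's threshold is raised so that no single neuron dominates. All constants here are hypothetical.

```python
import numpy as np

n = 100                                              # output neurons (hypothetical)
v_thresh, theta_plus, theta_decay = 1.0, 0.05, 1e-4  # assumed constants

def step(v, theta, I):
    """One timestep: integrate input current I, let at most one neuron fire
    (winner-take-all via recurrent inhibition), and raise the winner's
    adaptive threshold so frequently firing neurons become harder to excite."""
    v = v + I
    fired = np.zeros_like(v, dtype=bool)
    over = v - (v_thresh + theta)      # distance past each adaptive threshold
    if (over >= 0).any():
        winner = int(np.argmax(over))  # strongest neuron wins
        fired[winner] = True
        theta = theta.copy()
        theta[winner] += theta_plus    # homeostatic threshold increase
        v = np.zeros_like(v)           # recurrent inhibition resets the layer
    return v, theta * (1.0 - theta_decay), fired

# Toy usage: drive the layer with random input currents.
rng = np.random.default_rng(0)
v, theta = np.zeros(n), np.zeros(n)
for _ in range(50):
    v, theta, fired = step(v, theta, rng.random(n) * 0.05)
```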
“…Lee et al [17] proposed spike-based backpropagation, while [26] showed a training solution using equilibrium propagation. Saunders et al [30] described another unsupervised technique for feature learning. Spiking networks can be as shallow or as deep as their predecessors.…”
Section: Introduction
confidence: 99%
“…They are deeply dependent on the biological background of neural networks. Among the diversity of neural networks, spiking neural networks (SNNs) are advocated because this kind of network more closely mimics natural neural networks; that is, the structure of SNNs is designed to describe realistic brain-like information processing (Bohte et al., 2002; Shrestha and Song, 2017; Saunders et al., 2019). SNN-based modelling has become popular and well-regarded.…”
Section: Introduction
confidence: 99%