2019
DOI: 10.3389/fncom.2019.00055
Back-Propagation Learning in Deep Spike-By-Spike Networks

Abstract: Artificial neural networks (ANNs) are important building blocks in technical applications. They rely on noiseless continuous signals in stark contrast to the discrete action potentials stochastically exchanged among the neurons in real brains. We propose to bridge this gap with Spike-by-Spike (SbS) networks which represent a compromise between non-spiking and spiking versions of generative models. What is missing, however, are algorithms for finding weight sets that would optimize the output performances of de…

Cited by 10 publications (11 citation statements)
References: 41 publications
“…Since the Boolean inner product is based on the parity function, we first looked at how parity can be directly learned. The parity function is notorious for being very hard to learn with back-propagation [39, 40, 41, 42], especially when near-minimal networks are used. We then investigated the learnability of the Boolean inner product with the standard solution architectures, the shallow DNF architecture and the deep exact architecture…”
Section: Results
confidence: 99%
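To make the parity remark concrete, the following is a minimal, hypothetical sketch (not taken from the cited papers) that trains a small multilayer perceptron on 4-bit parity with plain back-propagation in NumPy. The hidden-layer width, learning rate, and epoch count are illustrative assumptions; with a near-minimal hidden layer or a different random seed, training frequently stalls, which is exactly the difficulty the citing authors refer to.

```python
# Hypothetical illustration: back-propagation on the n-bit parity problem.
# All hyperparameters below are assumptions chosen for the sketch.
import numpy as np

rng = np.random.default_rng(0)
n_bits = 4

# All 2^n binary input patterns and their parity labels.
X = np.array([[(i >> b) & 1 for b in range(n_bits)]
              for i in range(2 ** n_bits)], dtype=float)
y = X.sum(axis=1) % 2

hidden = 8                                  # a near-minimal network would use ~n_bits units
W1 = rng.normal(0.0, 1.0, (n_bits, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 1.0, (hidden, 1))
b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(20000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()

    # Back-propagate the mean cross-entropy loss (dL/dz_out = p - y).
    d_out = (p - y)[:, None] / len(X)
    dW2, db2 _= h.T @ d_out, None  # placeholder removed below
```

(The line above is corrected in the full loop below.)

```python
    dW2 = h.T @ d_out
    db2_grad = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)   # tanh derivative
    dW1 = X.T @ d_h
    db1_grad = d_h.sum(axis=0)

    # Gradient-descent step.
    W1 -= lr * dW1
    b1 -= lr * db1_grad
    W2 -= lr * dW2
    b2 -= lr * db2_grad

print("training accuracy:", ((p > 0.5) == y).mean())
```

Shrinking the hidden layer toward n_bits units or changing the seed often leaves such a network in a poor local minimum, which is why parity serves as a standard hard benchmark for back-propagation.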
“…The network architecture. In the case of NNMF and SbS networks, the convolution layer (conv), the full layer, and the output layer are constructed from IPs with their corresponding dynamics (Equations 1 and 3) acting on their latent variables (see [4] for more details). For the CNN, the convolution layers and the full layer are followed by ReLU layers…”
Section: Both NNMF
confidence: 99%
“…For the CNN, the convolution layers and the full layer are followed by ReLU layers. Also, the pooling layers are max pooling layers instead of the average pooling layers used for NNMF and SbS. For NNMF and SbS layers we introduced the so-called inference populations (IPs) [4]. Every IP operates independently of the other IPs in a layer…”
Section: Both NNMF
confidence: 99%
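For orientation only, the CNN reference architecture described in these excerpts (convolution layers followed by ReLU, max pooling rather than the average pooling used for NNMF/SbS, then a full layer and an output layer) could be sketched in PyTorch as below. The channel counts, kernel sizes, and the 28×28 single-channel input are assumptions for illustration, not the configuration reported in the paper; the NNMF/SbS variant would replace these blocks with inference populations and average pooling, whose dynamics (Equations 1 and 3 of the citing paper) are not reproduced here.

```python
# Hypothetical CNN sketch matching the description above; all sizes are assumed.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=5, padding=2),   # convolution layer 1
    nn.ReLU(),
    nn.MaxPool2d(2),                              # max pooling (CNN variant)
    nn.Conv2d(32, 64, kernel_size=5, padding=2),  # convolution layer 2
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 7 * 7, 1024),                  # full (fully connected) layer
    nn.ReLU(),
    nn.Linear(1024, 10),                          # output layer
)

x = torch.randn(1, 1, 28, 28)                     # dummy 28x28 input
print(cnn(x).shape)                               # torch.Size([1, 10])
```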
“…A combination of an MLP and the gradient descent method results in a very effective algorithm, known as the back-propagation (BP) algorithm [25, 26, 27, 28, 29]. The main idea of the gradient descent method is to move the weight of each node in the negative direction of the loss-function gradient, so that the network adjusts the weight value of each node by itself…”
Section: Construction of the Model
confidence: 99%
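The update rule described in this quote is simply w ← w − η ∂L/∂w. A minimal, hypothetical NumPy illustration on a toy linear regression problem (the data, learning rate, and step count are arbitrary assumptions) is:

```python
# Hypothetical illustration of gradient descent: move each weight a small step
# in the negative direction of the loss gradient.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))          # toy inputs
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true                         # toy targets from a known linear model

w = np.zeros(3)                        # initial weights
lr = 0.1                               # learning rate (eta)
for step in range(200):
    err = X @ w - y                    # residuals
    grad = X.T @ err / len(X)          # gradient of 0.5 * mean squared error
    w -= lr * grad                     # w <- w - eta * dL/dw

print(w)                               # converges toward w_true
```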