2021
DOI: 10.3389/fnins.2021.629892

Learning Without Feedback: Fixed Random Learning Signals Allow for Feedforward Training of Deep Neural Networks

Abstract: While the backpropagation of error algorithm enables deep neural network training, it implies (i) bidirectional synaptic weight transport and (ii) update locking until the forward and backward passes are completed. Not only do these constraints preclude biological plausibility, but they also hinder the development of low-cost adaptive smart sensors at the edge, as they severely constrain memory accesses and entail buffering overhead. In this work, we show that the one-hot-encoded labels provided in supervised …

Cited by 55 publications (63 citation statements)
References 37 publications (51 reference statements)
“…The propagation of synaptic plasticity is closely related to the credit assignment of error signals in SNNs. Zhang et al. have given an overview of several target propagation methods, such as error propagation, symbol propagation, and label propagation (Frenkel et al., 2019), where reward propagation delivers the reward (instead of the traditional error signals) directly to all hidden layers (instead of the traditional layer-to-layer backpropagation). This plasticity is biologically plausible and will also serve as the main credit-assignment mechanism for the SNNs in our NRR-SNN algorithm.…”
Section: Related Work
confidence: 99%
“…This study aims to modify all synaptic weights in parallel without suffering from the vanishing-gradient problem, especially for dynamic LIF neurons. Hence, we pay particular attention to target propagation (Frenkel et al., 2019), as shown in Figure 1A, where the error or other reward-like signals are propagated directly from the output layer to all hidden layers in parallel without losing accuracy.…”
Section: Standard Target Propagation
confidence: 99%
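The scheme described in this excerpt, in which a single error-like signal at the output is broadcast directly to every hidden layer instead of flowing backward layer by layer, can be sketched in a few lines of NumPy. This is only an illustrative sketch: the layer sizes, seed, learning rate, and the single training example are assumptions, not values from any cited paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three-layer network; the output error is projected straight to each
# hidden layer through its own fixed random matrix, skipping the
# layer-to-layer backward pass entirely.
n_in, n_h1, n_h2, n_out = 4, 8, 8, 3
W1 = rng.normal(0.0, 0.1, (n_h1, n_in))
W2 = rng.normal(0.0, 0.1, (n_h2, n_h1))
W3 = rng.normal(0.0, 0.1, (n_out, n_h2))
B1 = rng.normal(0.0, 0.1, (n_h1, n_out))  # fixed random feedback paths
B2 = rng.normal(0.0, 0.1, (n_h2, n_out))

x = rng.normal(size=n_in)
target = np.array([0.0, 0.0, 1.0])  # one-hot label

lr = 0.05
losses = []
for _ in range(100):
    h1 = np.tanh(W1 @ x)              # forward pass
    h2 = np.tanh(W2 @ h1)
    y = W3 @ h2
    e = y - target                    # output error
    losses.append(0.5 * np.sum(e ** 2))
    # Both hidden layers receive the same output error in parallel,
    # each through its own fixed random matrix; no transposed forward
    # weights and no sequential backward sweep are needed.
    d2 = (B2 @ e) * (1.0 - h2 ** 2)   # tanh'(a) = 1 - tanh(a)^2
    d1 = (B1 @ e) * (1.0 - h1 ** 2)
    W3 -= lr * np.outer(e, h2)
    W2 -= lr * np.outer(d2, h1)
    W1 -= lr * np.outer(d1, x)
```

Because `d1` and `d2` are computed from `e` alone, the two hidden-layer updates are independent of each other and could run simultaneously, which is the parallelism the excerpt refers to.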
“…Random matrices are used in certain backpropagation techniques, such as feedback alignment [19], [22], random backpropagation [7], and related algorithms [11], [16]. In this work, the transpose of the forward weight matrices in the backpropagation process is replaced with a random matrix, increasing training speed and generalizability while mimicking biological processes.…”
Section: Connection To Other Work
confidence: 99%
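The substitution this excerpt describes, replacing the transposed forward weights with a fixed random matrix in the backward pass, can be shown with a minimal NumPy sketch of feedback alignment on a two-layer network. All dimensions, the seed, and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer network: x -> h -> y
n_in, n_hid, n_out = 4, 8, 3
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))
# Fixed random feedback matrix: stands in for W2.T in the backward pass
B = rng.normal(0.0, 0.1, (n_hid, n_out))

x = rng.normal(size=n_in)
target = np.array([1.0, 0.0, 0.0])  # one-hot label

lr = 0.1
losses = []
for _ in range(50):
    h = np.tanh(W1 @ x)              # forward pass
    y = W2 @ h
    e = y - target                   # output error
    losses.append(0.5 * np.sum(e ** 2))
    # Exact backprop would use W2.T @ e here; feedback alignment
    # substitutes the fixed random matrix B instead.
    delta_h = (B @ e) * (1.0 - h ** 2)   # tanh'(a) = 1 - tanh(a)^2
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
```

Because `B` never changes, the forward weights gradually come into agreement with it during training, which is the "alignment" effect reported in the works cited above.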
“…The data point is assigned to the class that produces the lowest firing across all layers of the network. This architecture is similar to Direct Random Target Projection (Frenkel et al., 2021), which projects the one-hot-encoded targets onto the hidden layers for training multi-layer networks. The notable difference, aside from the neuromorphic aspect, is that we use the input and target information in each layer to train the lateral connections within the layer, not the feed-forward weights from the preceding layer.…”
Section: Network 3: Including Target Information In Layer-wise Training Of Fully-connected Layers
confidence: 99%
“…One well-known method is feedback alignment, also known as random backpropagation, which eliminates the weight transport problem by using fixed random weights in the feedback path for propagating error gradient information (Liao et al., 2016; Lillicrap et al., 2016). Subsequent research showed that directly propagating the output error (Nøkland and Eidnes, 2019) or even the raw one-hot-encoded targets (Frenkel et al., 2021) is sufficient to maintain feedback alignment and, in the case of the latter, also eliminates update locking by allowing simultaneous, independent weight updates at each layer. Equilibrium propagation (Scellier and Bengio, 2017) is another biologically relevant algorithm for training energy-based models, in which the network first relaxes to a fixed point of its energy function in response to an external input.…”
Section: Introduction
confidence: 99%
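The "raw one-hot encoded targets" variant mentioned in this excerpt can be sketched in the spirit of Direct Random Target Projection: the hidden layer's modulatory signal is a fixed random projection of the label itself, so no error signal ever travels backward and each layer can update as soon as its own forward activation is available. The network size, seed, learning rate, and single training example below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hid, n_out = 4, 8, 3
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))
# Fixed random projection of the one-hot label onto the hidden layer
B1 = rng.normal(0.0, 0.1, (n_hid, n_out))

x = rng.normal(size=n_in)
c = np.array([0.0, 1.0, 0.0])   # one-hot target

lr = 0.05
losses = []
for _ in range(100):
    h = np.tanh(W1 @ x)          # forward pass
    y = W2 @ h
    e = y - c
    losses.append(0.5 * np.sum(e ** 2))
    # Hidden layer: modulatory signal derived from the label alone,
    # with no error and no backward pass -- the update needs nothing
    # beyond this layer's own forward quantities, removing update locking.
    delta1 = (B1 @ c) * (1.0 - h ** 2)   # tanh'(a) = 1 - tanh(a)^2
    W1 -= lr * np.outer(delta1, x)
    # Output layer: local delta rule on the error
    W2 -= lr * np.outer(e, h)
```

Since the label is known as soon as the example is presented, `delta1` can be computed before the forward pass even reaches the output, which is what allows the simultaneous, independent per-layer updates described above.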