Proceedings of International Conference on Neural Networks (ICNN'96)
DOI: 10.1109/icnn.1996.548859
On lateral connections in feed-forward neural networks

Abstract: Feed-forward neural networks trained with back-propagation, or a variation of it, constitute the most widely applied of all synthetic neural network paradigms. Several efforts have been directed towards faster training of such networks. Most of these efforts attempt to take as large steps as possible during the training process. However, another potential source of inefficiency…

Cited by 12 publications (10 citation statements)
References 5 publications (4 reference statements)
“…[15][16][17] The proposed architecture consists of a feed-forward network with a lateral connection from neuron (j−1) in the hidden layer to neuron j in the hidden layer, as illustrated in Figure 2.…”
Section: Methods
confidence: 99%
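To make the cited architecture concrete, here is a minimal NumPy sketch of a forward pass through a hidden layer in which neuron j receives a lateral input from neuron j−1, as the citing text describes. The placement of the lateral term (added to the pre-activation) and all names (hidden_forward, W, b, L) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def hidden_forward(x, W, b, L, activation=np.tanh):
    """Hidden layer with chain-style lateral connections.

    x : (n_in,)        input vector
    W : (n_hid, n_in)  input-to-hidden weights
    b : (n_hid,)       biases
    L : (n_hid,)       lateral weights; L[j] scales the connection
                       from neuron j-1 to neuron j (L[0] is unused)

    Returns the activations h and the total pre-activations z.
    """
    net = W @ x + b                  # standard feed-forward term
    n_hid = net.shape[0]
    h = np.zeros(n_hid)
    z = np.zeros(n_hid)
    z[0] = net[0]                    # neuron 0 has no left neighbour
    h[0] = activation(z[0])
    for j in range(1, n_hid):
        # lateral connection: neuron j also sees neuron j-1's output
        z[j] = net[j] + L[j] * h[j - 1]
        h[j] = activation(z[j])
    return h, z
```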
“…(15), indicate that the error at a hidden layer neuron is dependent on the error at the output as well as on the other hidden layer neurons.…”
Section: (8)
confidence: 99%
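This dependence falls out of the chain rule: because neuron j feeds neuron j+1 through a lateral weight, its error term picks up a contribution from its neighbour's error in addition to the usual output-layer term. A hedged sketch continuing the forward-pass code above (the paper's actual equations are not reproduced in these excerpts; the linear output weights V, the output errors e_out, and the tanh derivative are assumptions):

```python
def hidden_deltas(z, V, e_out, L):
    """Back-propagated error terms for the laterally connected layer.

    z     : (n_hid,)        total pre-activations from hidden_forward
    V     : (n_out, n_hid)  hidden-to-output weights (assumed linear output)
    e_out : (n_out,)        output-layer error terms
    L     : (n_hid,)        lateral weights
    """
    act_deriv = lambda a: 1.0 - np.tanh(a) ** 2   # derivative of tanh
    n_hid = z.shape[0]
    delta = np.zeros(n_hid)
    # sweep right-to-left so delta[j+1] is available when computing delta[j]
    for j in reversed(range(n_hid)):
        back = V[:, j] @ e_out               # error arriving from the output
        if j + 1 < n_hid:
            back += L[j + 1] * delta[j + 1]  # error from the right neighbour
        delta[j] = act_deriv(z[j]) * back
    return delta
```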
“…The complete algorithm thus consists of initializing the parameters of the network and using equations (8)-(14) to update the parameters. As an additional note, one may observe that the error propagating back is different for each hidden layer neuron (see equations (10), (12), (13), and (14)). Consequently, these lateral connections also serve to facilitate differentiation of the hidden layer neurons. [12,13]…”
Section: The Training Algorithm
confidence: 99%
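Putting the two sketches together, one training step in the spirit of the cited algorithm might look as follows. This is a hedged stand-in for equations (8)-(14), which are not reproduced here; the linear output layer, squared-error loss, and learning rate lr are all assumptions. Note that delta differs per hidden neuron, which is the differentiation effect the citing text observes.

```python
def train_step(x, t, W, b, L, V, c, lr=0.1):
    """One gradient-descent update for the laterally connected network
    (illustrative stand-in for the paper's equations (8)-(14))."""
    h, z = hidden_forward(x, W, b, L)
    y = V @ h + c                         # assumed linear output layer
    e_out = y - t                         # output error for squared loss
    delta = hidden_deltas(z, V, e_out, L)
    V -= lr * np.outer(e_out, h)          # hidden-to-output weights
    c -= lr * e_out                       # output biases
    W -= lr * np.outer(delta, x)          # input-to-hidden weights
    b -= lr * delta                       # hidden biases
    L[1:] -= lr * delta[1:] * h[:-1]      # lateral weights (L[0] unused)
    return 0.5 * float(e_out @ e_out)     # half squared error
```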
“…implemented in hardware [26] and which we base our work on. This network architecture, with lateral connections between neighboring neurons in the hidden layer to speed up network convergence during training, is discussed in [12]. The neural network in Figure 2 [includes]: • A hidden layer PE array with dual-ring topology connecting each two adjacent PEs, in which each hidden layer neuron is mapped onto a single PE.…”
Section: Outline of the Thesis
confidence: 99%