1993
DOI: 10.1109/79.180705

Progress in supervised neural networks


Cited by 1,133 publications (373 citation statements)
References 71 publications
“…2). The network is trained using a standard backpropagation algorithm [12] with randomly initialized weights. The backpropagation weight update rule, also called the generalized delta rule, reads as follows:

\Delta w_{ij} = p \, l_j \, o_i,
l_j = f'(\mathrm{net}_j)\,(t_j - o_j) \quad (j \text{ an output neuron}),
l_j = f'(\mathrm{net}_j)\,\sum_k l_k w_{jk} \quad (j \text{ a hidden neuron}),

where p is the learning factor (a constant), l_j the error of neuron j (the difference between the real output and the teaching input), net_j the net input of neuron j, t_j the teaching input of neuron j, o_i the output of the preceding neuron i, i the index of a predecessor of the current neuron j with link w_ij from i to j, j the index of the current neuron, and k the index of a successor to the current neuron j with link w_jk from j to k. A new activation a_j(t) of neuron j in step t is computed using the sigmoidal function as transfer function f:

a_j(t) = f(\mathrm{net}_j(t)) = \frac{1}{1 + e^{-\mathrm{net}_j(t)}} …”
Section: Classification Using Neural Network (mentioning)
confidence: 99%
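For concreteness, a minimal NumPy sketch of the quoted rule follows: one forward pass and one application of w_ij <- w_ij + p * l_j * o_i for a single pattern. The layer sizes (2-3-1), the input pattern, and the value of p are illustrative assumptions, not taken from the cited paper.

import numpy as np

def sigmoid(net):
    # Sigmoidal transfer function f; note f'(net) = o * (1 - o) for o = f(net).
    return 1.0 / (1.0 + np.exp(-net))

rng = np.random.default_rng(0)
p = 0.5                            # learning factor (a constant); assumed value
W1 = rng.normal(0.0, 0.5, (2, 3))  # links w_ij, input -> hidden (randomly initialized)
W2 = rng.normal(0.0, 0.5, (3, 1))  # links w_jk, hidden -> output

x = np.array([0.0, 1.0])           # one input pattern (assumed)
t = np.array([1.0])                # teaching input t_j (assumed)

# Forward pass: net input net_j, then activation o_j = f(net_j).
o_h = sigmoid(x @ W1)
o_out = sigmoid(o_h @ W2)

# Errors l_j: output units use (t_j - o_j); hidden units sum over successors k.
l_out = o_out * (1.0 - o_out) * (t - o_out)
l_h = o_h * (1.0 - o_h) * (W2 @ l_out)

# Generalized delta rule: w_ij <- w_ij + p * l_j * o_i.
W2 += p * np.outer(o_h, l_out)
W1 += p * np.outer(x, l_h)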
“…Nevertheless, it is very hard to choose the number of hidden layers (30). Most of the literature indicates that one hidden layer is sufficient to validate the prediction and may be the best choice for most applied feed-forward network designs (38). Thus, in this paper one hidden layer was used for modeling (Figure 2)…”
Section: ANN Modeling (mentioning)
confidence: 99%
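As a sketch of the single-hidden-layer design the excerpt settles on, the following assumes sigmoid activations and illustrative layer sizes; nothing here is taken from the cited study.

import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

def init_weights(n_in, n_hidden, n_out, seed=0):
    # One hidden layer means exactly two weight matrices.
    rng = np.random.default_rng(seed)
    return (rng.normal(0.0, 0.5, (n_in, n_hidden)),
            rng.normal(0.0, 0.5, (n_hidden, n_out)))

def predict(x, W1, W2):
    # input units -> single hidden layer -> output units
    return sigmoid(sigmoid(x @ W1) @ W2)

W1, W2 = init_weights(n_in=4, n_hidden=8, n_out=1)
y = predict(np.ones(4), W1, W2)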
“…The back-propagation error surface usually consists of a large number of flat regions as well as extremely steep regions (Hush and Horne, 1993), so it is difficult to choose an appropriate value for the learning rate. As such, a backpropagation algorithm with a variable learning rate is desirable [1]…”
Section: Variable Learning Rate (mentioning)
confidence: 99%
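One simple way to realize such a variable learning rate is the "bold driver" style heuristic sketched below: grow the rate while the error keeps falling (flat regions), shrink it when the error rises (steep regions). The factors 1.05 and 0.5 are conventional illustrative choices, not values from the cited papers.

def adapt_learning_rate(lr, prev_error, new_error, grow=1.05, shrink=0.5):
    # Flat regions of the error surface tolerate a larger step;
    # steep regions need a smaller one.
    if new_error < prev_error:
        return lr * grow      # error fell: take slightly bigger steps
    return lr * shrink        # error rose: back off sharply

Called once per epoch, e.g. lr = adapt_learning_rate(lr, e_prev, e_new), this keeps the step size large on plateaus without diverging on steep slopes.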
“…Learning phase: In this stage, the network weights are randomly generated and the training process tries to adjust the weights so that the actual output comes closer to the expected output [8], according to the following procedure: one of the patterns to be learned is put on the input units, and the output values of the hidden-layer units and the output-layer units are calculated…”
Section: Execute Phase (mentioning)
confidence: 99%
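A minimal sketch of that learning phase follows, assuming a single weight layer with a constant bias unit, logistic outputs, and illustrative OR-style patterns; the cited work's actual network and data are not specified here.

import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, (3, 1))   # randomly generated weights
# Each pattern's third input is a constant 1.0 acting as a bias unit.
patterns = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
targets = np.array([[0.], [1.], [1.], [1.]])  # expected outputs (logical OR)
lr = 0.5

for epoch in range(2000):
    for x, t in zip(patterns, targets):
        # Put one pattern on the input units and compute the output values.
        o = sigmoid(x @ W)
        # Adjust the weights so the actual output moves toward the expected one.
        delta = o * (1.0 - o) * (t - o)
        W += lr * np.outer(x, delta)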