2001
DOI: 10.1016/s0893-6080(01)00122-8
Upper bound of the expected training error of neural network regression for a Gaussian noise sequence

Cited by 29 publications (32 citation statements). References 14 publications.
“…The network is trained with the Matlab function traingda of the backpropagation algorithm, whereby 2 replicas of the pure symbol alphabet and 4 noised symbol alphabets are passed 10 times. In practice, it is sufficient to vary the number of hidden-layer neurons q from 10 up to 120: with q < 10 neurons the neural network is trained too slowly (Figure 1), and with q > 120 neurons the mean recognition error rate of the network (Figure 2) becomes greater than the error rate of the network trained without noise (Hagiwara, Hayasaka, Toda, Usui, & Kuno, 2001). It is seen from Figures 1 and 2 that q = 49, leading to 49 neurons in the hidden layer.…”
Section: Setting Neurons Into the Hidden Layer For A Practical Casementioning
confidence: 98%
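The hidden-layer size scan described in the excerpt above can be sketched as a simple model-selection loop. This is a minimal Python sketch, not the cited authors' MATLAB code; `mean_recognition_error` is a hypothetical stand-in for training the network with traingda and measuring its recognition error rate:

```python
import random

def mean_recognition_error(q, rng):
    """Hypothetical stand-in for training the network with q hidden
    neurons (MATLAB traingda in the cited paper) and measuring its
    mean recognition error rate on noised symbol alphabets."""
    # Assumed U-shaped curve: too few neurons underfit, too many overfit.
    return 0.02 + 1e-5 * (q - 49) ** 2 + 1e-4 * rng.random()

rng = random.Random(0)
candidates = range(10, 121)            # q is varied from 10 up to 120
errors = {q: mean_recognition_error(q, rng) for q in candidates}
q_best = min(errors, key=errors.get)   # size with the lowest error rate
print(q_best)
```

The selection rule itself (pick the q minimizing the measured error over the scanned range) is what matters; the error function here is assumed purely for illustration.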
“…Let the usefulness of 2LP (16) during the training be measured with its performance function "mse", i.e. the sum of squared errors [8], [9], [22], [23]. Finally, having preset the minimum performance gradient to 10⁻⁶, let the number of epochs be 5000 in order to prevent long-dragging convergence of TP and to shorten the ultimate TP period for each pass.…”
Section: Description Of 2lp (6) Configuration For M6080ismentioning
confidence: 99%
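The two stopping rules quoted above (a minimum performance gradient of 10⁻⁶ and an epoch cap of 5000) can be sketched as follows. This is an illustrative Python loop on a toy quadratic loss, not the authors' 2LP training code:

```python
# Stopping rules from the excerpt: halt when the performance gradient
# falls below 1e-6, or when 5000 epochs are reached, whichever is first.
MIN_GRAD = 1e-6
MAX_EPOCHS = 5000

def train(w0=5.0, lr=0.1):
    """Gradient descent on a toy quadratic 'mse' surface f(w) = w**2."""
    w = w0
    for epoch in range(1, MAX_EPOCHS + 1):
        grad = 2.0 * w               # d/dw of w**2
        if abs(grad) < MIN_GRAD:     # minimum performance gradient reached
            return w, epoch
        w -= lr * grad
    return w, MAX_EPOCHS             # epoch cap prevents long convergence

w_final, epochs_used = train()
print(epochs_used)
```

On this toy surface the gradient criterion fires long before the epoch cap; the cap only matters when convergence drags, which is exactly the role the excerpt assigns to it.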
“…However, yielding a PSTSSD of r ≈ 0.08 on average (see Fig. 2). Before verification, the best one of 100 2LPs (30) must be trained further until its performance becomes unimprovable [5], [8], [28]. The best 2LP (30) performs at an average CEP of 9.31 %.…”
Section: Models Of Stsm6080i and Stsm6080i Ndpdmentioning
confidence: 99%
“…By the way, testing sets differ from training sets because it is sufficient to evaluate CEP only at the SDI maximum [15], [23], [28], [29]. …”
Section: Formalisation Of the 2lp Classifiermentioning
confidence: 99%