2007 International Conference on Management Science and Engineering
DOI: 10.1109/icmse.2007.4421881

The Subjected SPDS Algorithm of Forward Neural Network

Cited by 1 publication (2 citation statements)
References 14 publications
“…We thus expect to see convergence times with an underlying proportionality to because as reduces, the probability of weight clipping increases, leading to a lower determinant of the Hessian matrix approximation and consequently a higher rate of convergence [24]. Reference [22] normalizes the network weights to a predetermined maximum value after each iteration; this leads to convergence issues based on the choice of maximum weight.…”
Section: Proposed Training Methods
confidence: 99%
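
The two weight-bounding strategies contrasted in this excerpt (element-wise clipping versus per-iteration rescaling to a predetermined maximum) can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not code from [22] or [24]; the bound `w_max`, the learning rate, and the gradient step are all hypothetical stand-ins.

```python
import numpy as np

def normalize_weights(weights, w_max):
    """Rescale the whole matrix so its largest magnitude equals w_max.

    Mirrors the per-iteration normalization attributed to [22]: after
    every training step the weights are pulled back to a predetermined
    maximum. A poorly chosen w_max makes this rescaling fight the
    gradient step, which is the convergence issue the excerpt notes.
    """
    peak = np.max(np.abs(weights))
    if peak > w_max:
        weights = weights * (w_max / peak)
    return weights

def clip_weights(weights, w_max):
    """Element-wise clipping to [-w_max, w_max], the alternative the
    excerpt contrasts: a tighter bound clips more weights, flattening
    the Hessian approximation."""
    return np.clip(weights, -w_max, w_max)

# Hypothetical training step followed by normalization.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
grad = rng.normal(size=(4, 3))
W -= 0.1 * grad                     # plain gradient step (assumed)
W = normalize_weights(W, w_max=1.0)
```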
“…In [23], analog circuits are designed to yield a gain of 5 times the activation function range, a method which becomes invalid for a dynamic network with multiple neurons, as gain variation increases with non-linearity and bandwidth. Finally, [24] optimizes the neural network one weight at a time, which is an inefficient training method for large networks.…”
Section: Proposed Training Methods
confidence: 99%
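
For context on the "optimizes the neural network one weight at a time" remark, a generic single-weight (coordinate-wise) update loop looks roughly like the sketch below. This is an illustration of coordinate descent in general, not the SPDS algorithm itself; the finite-difference gradient estimate, `delta`, `lr`, and the toy quadratic loss are all assumptions.

```python
import numpy as np

def coordinate_descent_step(weights, loss, delta=1e-3, lr=0.1):
    """One pass of single-weight optimization: perturb each weight in
    turn, estimate its partial derivative by central finite
    differences, and update only that weight. Each pass costs two
    loss evaluations per weight, which is why this style of training
    scales poorly to large networks."""
    flat = weights.ravel()  # view: edits write through to `weights`
    for i in range(flat.size):
        orig = flat[i]
        flat[i] = orig + delta
        up = loss(weights)
        flat[i] = orig - delta
        down = loss(weights)
        grad_i = (up - down) / (2 * delta)
        flat[i] = orig - lr * grad_i   # update this weight only
    return weights

# Hypothetical quadratic loss standing in for a network's training error.
target = np.array([[0.5, -0.2], [0.1, 0.3]])
loss = lambda W: float(np.sum((W - target) ** 2))
W = np.zeros((2, 2))
for _ in range(50):
    W = coordinate_descent_step(W, loss)
```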