[Proceedings] Singapore ICCS/ISITA '92 (1992)
DOI: 10.1109/iccs.1992.254895
Tuning of learning rate and momentum on back-propagation

Cited by 7 publications (4 citation statements) · References 1 publication
“…Several methods have been introduced to determine LPs. Kamiyama et al (1992) defined the learning and momentum coefficients as a linear function based on different learning and momentum coefficients data. Zaghwl and Dong (1994) utilized conjugate gradient methods for optimization of the learning and momentum coefficients.…”

Section: Related Work
confidence: 99%
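The excerpt above concerns tuning the learning-rate and momentum coefficients of back-propagation. As a point of reference, the standard momentum-based weight update those works tune can be sketched as follows; the step sizes (lr=0.1, momentum=0.9) and the toy quadratic objective are illustrative assumptions, not values from the cited papers.

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.1, momentum=0.9):
    """One gradient-descent weight update with a momentum term:

        v <- momentum * v - lr * grad
        w <- w + v

    lr is the learning rate, momentum the momentum coefficient --
    the two parameters whose tuning the cited works address.
    """
    v = momentum * v - lr * grad
    w = w + v
    return w, v

# Toy example: minimize f(w) = w^2 (gradient 2w) starting from w = 5.0.
w, v = 5.0, 0.0
for _ in range(200):
    w, v = sgd_momentum_step(w, v, 2.0 * w)
print(abs(w) < 1e-3)  # the iterate has converged close to the minimum at 0
```

With these settings the iterates oscillate around the minimum but decay geometrically; a larger learning rate or momentum coefficient can make the same recursion diverge, which is precisely why their joint tuning matters.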
“…Regarding topology, it has been shown by Hornik, Stinchcombe and White [19] that only one hidden layer is sufficient to approximate virtually any function to any degree of accuracy. However, no definitive procedure exists which specifies the optimal choice of the number of hidden nodes, although suggestions based upon theoretical considerations have been advanced by Kurkova [20], [21], and based on empirical considerations by Kamiyama et al [22], among others. In accordance with the findings of Miasek and Lin [8], we examined four different configurations, as follows: Case a: 28-10-4; Case b: 28-15-4; Case c: 28-5-4; and Case d: 28-2-4.…”

Section: Application Of the Clinical Matrix
confidence: 99%
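The "28-10-4" notation above denotes a feed-forward network with 28 inputs, one hidden layer of 10 units, and 4 outputs. A minimal forward-pass sketch of that topology, assuming sigmoid activations and small random initial weights (details not specified in the excerpt):

```python
import numpy as np

rng = np.random.default_rng(0)

# Case a from the excerpt: 28 inputs, 10 hidden units, 4 outputs.
# Activations and initialization are illustrative assumptions.
sizes = [28, 10, 4]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Propagate a 28-dimensional input through each layer in turn."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(a @ W + b)
    return a

out = forward(rng.standard_normal(28))
print(out.shape)  # (4,)
```

Changing the middle entry of `sizes` reproduces the other cases (28-15-4, 28-5-4, 28-2-4); only the hidden-layer width varies across them.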
“…To our knowledge, only a few research studies have analyzed the interdependencies among the backpropagation paradigm parameters regarding their tuning and scaling for different applications. Some parameters are maintained constant, while relationships among the others, representing their influences on learning, are established (Fahlman 1988; Kamiyama et al 1992; Sundararajan et al 1993; Tesauro and Janssens 1988). Only a fraction of the paradigm parameters are then considered in these relationships.…”

Section: François Michaud and Ruben Gonzalez Rubio
confidence: 99%