2019
DOI: 10.1007/s11063-019-10135-4
A Smoothing Algorithm with Constant Learning Rate for Training Two Kinds of Fuzzy Neural Networks and Its Convergence

Cited by 5 publications (2 citation statements)
References 28 publications
“…• Initial Learn Rate [60]: the Learn Rate governs the rate at which the algorithm learns the neural network model parameters. At the beginning of the learning process, the model parameters are far from those that minimise the cost function [61], so it will be useful to have a high initial learning rate for a faster decrease.…”
Section: B. Choice of Hyperparameters to Be Optimized (mentioning)
confidence: 99%
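The point made in the statement above — start with a high learning rate while the parameters are far from a minimiser of the cost, then let it shrink — can be sketched with a simple decayed schedule. This is a minimal illustration, not code from the cited paper; `exp_decay_lr` and all constants are illustrative.

```python
def exp_decay_lr(initial_lr, decay_rate, step):
    """Learning rate after `step` updates; names and values are illustrative."""
    return initial_lr * (decay_rate ** step)

def descend(w, steps, lr_fn):
    """Gradient descent on f(w) = w^2, whose gradient is f'(w) = 2w."""
    for t in range(steps):
        grad = 2.0 * w
        w -= lr_fn(t) * grad
    return w

# A high initial rate (0.4) makes early progress fast; the 0.95 decay
# factor tames the step size as w approaches the minimiser at 0.
w_final = descend(5.0, 50, lambda t: exp_decay_lr(0.4, 0.95, t))
```

With these constants every update factor `1 - 2*lr` stays in (0, 1), so the iterate decreases monotonically toward zero.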
“…According to the comparison of the classification error and convergence speed of BP networks with different numbers of hidden-layer nodes, the BP network topology used in the classification experiment on the computer-network data set is finally determined as: 12 input-layer nodes, 36 hidden-layer nodes, and 12 output-layer nodes [13]. Carrying out the final classification experiment with the selected network structure, the result is shown in Figure 3.…”
Section: Neural Network and Classification Algorithm (mentioning)
confidence: 99%
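The 12-36-12 topology named in the statement above can be sketched as a forward pass through a small fully connected network. This is an illustrative shape check only, assuming sigmoid hidden units and a softmax output for 12-class classification; the weights are random placeholders, not the trained parameters of the cited experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Placeholder weights for the cited 12-36-12 topology:
# 12 input nodes -> 36 hidden nodes -> 12 output nodes.
W1 = rng.normal(scale=0.1, size=(36, 12))
b1 = np.zeros(36)
W2 = rng.normal(scale=0.1, size=(12, 36))
b2 = np.zeros(12)

def forward(x):
    """One forward pass; softmax output gives 12 class probabilities."""
    h = sigmoid(W1 @ x + b1)
    logits = W2 @ h + b2
    e = np.exp(logits - logits.max())  # subtract max for numerical stability
    return e / e.sum()

probs = forward(rng.normal(size=12))
```

The output is a length-12 probability vector, matching one probability per output-layer node.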