2020
DOI: 10.1080/0954898x.2020.1849842
Data classification based on fractional order gradient descent with momentum for RBF neural network

Cited by 8 publications (5 citation statements)
References 26 publications
“…L is the truncation order and T denotes the discrete sampling period. Setting α = 1 in (19) and T = 1 in (22), then replacing r(t) with w_n(t) and t with t + 1 in (22), the velocity update expression of the FO-PSO is written as [61]:…”
Section: Fractional Order Swarm Optimization
confidence: 99%
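The quoted statement describes a truncated fractional-order velocity rule, in which the single inertia term α·v(t) of classical PSO is replaced by a weighted sum over the last L velocities. A minimal numerical sketch of that update follows, assuming the standard Grünwald–Letnikov truncation; the function name, coefficient recursion, and default parameters are illustrative, not taken from the cited paper:

```python
import numpy as np

def fo_velocity_update(alpha, v_hist, x, pbest, gbest, c1=1.5, c2=1.5, rng=None):
    """Truncated fractional-order PSO velocity update (illustrative sketch).

    v_hist : list of the last L velocities [v(t), v(t-1), ...], newest first;
             len(v_hist) plays the role of the truncation order L.
    With alpha = 1 only the first coefficient is non-zero, so the rule
    collapses to the classical integer-order PSO velocity update.
    """
    rng = rng or np.random.default_rng(0)
    v_new = np.zeros_like(x, dtype=float)
    coeff = alpha                          # k = 0 term: alpha * v(t)
    for k, v_k in enumerate(v_hist):
        v_new += coeff * v_k
        # next Grunwald-Letnikov-style coefficient:
        # alpha, alpha(1-alpha)/2, alpha(1-alpha)(2-alpha)/6, ...
        coeff *= (k + 1 - alpha) / (k + 2)
    r1, r2 = rng.random(2)
    return v_new + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```

Note that for α = 1 the second coefficient α(1 − α)/2 vanishes, which is exactly the "letting α as 1" reduction mentioned in the excerpt.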
“…Fractional calculus operators are also exploited to design novel recursive/adaptive algorithms as well as evolutionary/swarm computation heuristics for different optimization tasks involved in engineering and science applications. For example, the fractional gradient descent/fractional least mean square algorithm was proposed for various applications including recommender systems [10], channel estimation [11], automatic identification systems [12], power system optimization [13], economics [14], radar signal processing [15], system identification [16,17], Hammerstein output error identification [18], wireless sensor networks [19], neural network optimization [20–24], chaotic time-series prediction [25,26], oscillators [27], vibration rejection [28], nonlinear ARMAX identification [29] and parameter estimation of input nonlinear control autoregressive (IN-CAR) systems [29,30].…”
Section: Introduction – Literature Review
confidence: 99%
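The cited article's central technique is fractional-order gradient descent with momentum (FOGDM). As a rough orientation, one common Caputo-style variant scales the ordinary gradient by |w − w₀|^(1−α)/Γ(2−α) before applying a momentum update; the sketch below implements that variant. The function name, the exact scaling, and all default parameters are assumptions for illustration, not the cited paper's verbatim rule:

```python
import numpy as np
from math import gamma

def fogdm_step(w, grad, velocity, w_init, alpha=0.9, lr=0.01, beta=0.9, eps=1e-8):
    """One fractional-order gradient descent with momentum (FOGDM) step.

    Sketch of a common Caputo-style formulation: the integer-order gradient
    is scaled by |w - w_init|^(1 - alpha) / Gamma(2 - alpha).  For alpha = 1
    the scale factor is 1, recovering ordinary momentum gradient descent.
    """
    # eps keeps the power well-defined when w has not yet moved from w_init
    frac_grad = grad * np.abs(w - w_init + eps) ** (1.0 - alpha) / gamma(2.0 - alpha)
    velocity = beta * velocity - lr * frac_grad   # momentum accumulation
    return w + velocity, velocity
```

In an RBF network, a step like this would be applied to the output weights (and optionally the centers and widths) in place of the usual integer-order gradient update.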
“…The performance of the proposed method is compared with a BP algorithm based on stochastic gradient descent (SGBP) [11], a constrained optimization method based on a BP neural network (CO-BP) [23], an RBF network based on fuzzy c-means clustering (FCRBF) [14], and an optimized RBF network based on fractional order gradient descent with momentum (FOGDM-RBF) [24]. All data samples in each dataset were scaled to [−1, 1], the number of kernels in the RBF mapping layer was adjusted manually based on the distribution of the sample space, the number of BP hidden layers was set to one or two, the number of BP hidden-layer nodes was set between two and nine, the network learning rate was adjusted iteratively using the simulated annealing algorithm, and the sigmoid kernel parameters were set as a = 1.1716 and b = 0.6667. The operating environment of the experiment was an Intel(R) Core i7-9700 3.00 GHz CPU, 8 GB RAM, and MATLAB 2013.…”
Section: Experimental Comparison and Analysis
confidence: 99%
“…In this study, several MLP-ANN methods including Levenberg-Marquardt (32), Bayesian Regularization (33), BFGS quasi-Newton backpropagation (34), … (41), Gradient Descent with Momentum, and Gradient Descent (42) were designed based on trial-and-error with different configurations. The structure of the neural networks was varied by changing the number of hidden-layer neurons from 5 to 30 and using 12 different learning algorithms (see Table 2).…”
Section: Model Development
confidence: 99%
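The trial-and-error design described in this excerpt amounts to an exhaustive search over hidden-layer widths and training algorithms. A minimal skeleton of that loop is sketched below; `evaluate` is a hypothetical stand-in for "train the MLP-ANN with this configuration and return its validation error", and the algorithm abbreviations are illustrative shorthands for the methods named in the excerpt:

```python
from itertools import product

def trial_and_error_search(evaluate, neurons=range(5, 31),
                           algorithms=("lm", "br", "bfgs")):
    """Enumerate (hidden-neuron count, learning algorithm) configurations
    and return the pair with the lowest score (e.g. validation RMSE).

    evaluate(n, algo) is a user-supplied function that trains and scores
    one network; this skeleton only organizes the search itself.
    """
    best_cfg, best_score = None, float("inf")
    for n, algo in product(neurons, algorithms):
        score = evaluate(n, algo)
        if score < best_score:
            best_cfg, best_score = (n, algo), score
    return best_cfg, best_score
```

With 26 width settings and 12 algorithms this search trains 312 networks, which is why such studies usually cap the neuron range rather than searching deeper architectures.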