2017
DOI: 10.1016/j.neunet.2017.02.007

Fractional-order gradient descent learning of BP neural networks with Caputo derivative

Cited by 153 publications (86 citation statements)
References 18 publications
“…This idea is still novel and needs to see improvements. For example, the gradient descent method has been handled by Sheng et al. [32], [33], Wang et al. [28], Wei et al. [9], and Bao et al. [22]. These methods are still early in development.…”
Section: Results
confidence: 99%
“…Fractional-order methods have been used to investigate complex-valued neural networks in [24] and recurrent neural network models in [44]. In [28] and [22], gradients based on the Caputo fractional derivative are used to update the parameters, while integer-order gradients handle backpropagation, allowing for simpler computation. The experiments therein show improved network accuracy compared to integer-order methods at comparable computational cost.…”
Section: Introduction
confidence: 99%
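
As a rough illustration of the hybrid scheme described above, the sketch below (Python/NumPy) updates a scalar parameter with a Caputo-type fractional gradient while the ordinary integer-order gradient is computed as usual. It uses the first-order truncation D^α E(w) ≈ E'(w)·|w − c|^(1−α)/Γ(2−α) with the previous iterate as the lower terminal c, a common choice in fractional-order BP papers; the helper name caputo_frac_grad, the toy loss, and the step sizes are assumptions of this sketch, not the exact scheme of [28] or [22].

import numpy as np
from scipy.special import gamma

def caputo_frac_grad(grad, w, w_prev, alpha):
    # First-order truncation of the Caputo fractional gradient:
    #   D^alpha E(w) ~ E'(w) * |w - w_prev|^(1 - alpha) / Gamma(2 - alpha),
    # with the previous iterate as lower terminal (assumption of this sketch).
    eps = 1e-12  # keeps |w - w_prev|^(1 - alpha) from collapsing to zero
    return grad * (np.abs(w - w_prev) + eps) ** (1.0 - alpha) / gamma(2.0 - alpha)

# Toy example: minimize E(w) = 0.5 * (w - 3)^2 with fractional updates.
alpha, lr = 0.9, 0.1
w, w_prev = 0.0, -0.1
for _ in range(300):
    g = w - 3.0  # integer-order gradient E'(w), as in ordinary backpropagation
    w, w_prev = w - lr * caputo_frac_grad(g, w, w_prev, alpha), w
print(w)  # ends up near the minimizer w* = 3

For alpha = 1 the scaling factor reduces to 1 and the update collapses to plain gradient descent, which is one way to sanity-check the truncation.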
“…The simulated system shown in Figures 5-7 is an 8-input, 1-output spiking neural system in which the first four kernel functions (k(1), k(2), k(3), k(4)) undergo step changes while the other two kernels (k(5), k(6)) undergo slower, gradual changes concurrently. To make the tracking task more difficult, the last two kernels (k(7), k(8)) are designed to be zero, acting as redundant inputs to the spiking neural system. The transient changes for all step-changing kernels occur at 400 s. In particular, the amplitudes of k(1) and k(2) are doubled, while the amplitudes of k(3) and k(4) are halved.…”
Section: Simulation Studies
confidence: 99%
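
As a sketch of how the tracking benchmark above could be configured, the snippet below builds the eight kernel-amplitude schedules at 1 s resolution. Only the step time (400 s), the doubling of k(1)-k(2), the halving of k(3)-k(4), and the zero redundant kernels k(7)-k(8) come from the excerpt; the total duration, the unit baselines, and the linear shape of the gradual drift for k(5)-k(6) are assumptions for illustration.

import numpy as np

t = np.arange(0.0, 800.0, 1.0)      # 1 s resolution; 800 s duration is assumed
step = (t >= 400.0).astype(float)   # transient step change at 400 s (from excerpt)

amp = np.ones((8, t.size))          # unit baseline amplitudes (assumed)
amp[0] *= 1.0 + step                # k(1): amplitude doubles at 400 s
amp[1] *= 1.0 + step                # k(2): amplitude doubles at 400 s
amp[2] *= 1.0 - 0.5 * step          # k(3): amplitude halves at 400 s
amp[3] *= 1.0 - 0.5 * step          # k(4): amplitude halves at 400 s
amp[4] = 1.0 + 0.5 * t / t[-1]      # k(5): slow gradual change (linear shape assumed)
amp[5] = 1.0 - 0.5 * t / t[-1]      # k(6): slow gradual change (linear shape assumed)
amp[6] = 0.0                        # k(7): zero kernel, redundant input
amp[7] = 0.0                        # k(8): zero kernel, redundant input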
“…Peak amplitudes of actual kernels (black) and estimated kernels (blue for SSPPF, green for sSSPPF, purple for dMGLV, and red for the proposed sMGLV model) across the simulation time evolution in a time-varying system with 1 s resolution.

Kernel   SSPPF    sSSPPF   dMGLV    sMGLV
k(4)     0.0911   0.0882   0.0521   0.0408
k(5)     0.0525   0.0529   0.0260   0.0217
k(6)     0.0405   0.0387   0.0222   0.0162
k(7)     0.0431   0.0000   0.0250   0.0000
k(8)     0.0467   0.0000   0.0371   0.0000
k(9)     0 …”
Section: Simulation Studies
confidence: 99%
“…D. C. Huang and S. C. Xie [3] applied a BP neural network to monitoring tailings dam settlement. In 2017, J. Wang et al. [11] proposed a fractional gradient descent method for the BP training of neural networks, and the monotonicity and weak (strong) convergence of the proposed approach are proved in detail.…”
Section: Introduction
confidence: 99%
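
For reference, the Caputo derivative underlying this line of work is the standard definition below, and the update is the generic fractional-order gradient step with lower terminal c and learning rate η; this is a sketch of the general form, not necessarily the exact scheme whose convergence is proved in [11].

% Caputo fractional derivative of order 0 < \alpha < 1, lower terminal c
{}^{C}_{\,c}\!D^{\alpha}_{w} f(w)
  = \frac{1}{\Gamma(1-\alpha)} \int_{c}^{w} \frac{f'(t)}{(w-t)^{\alpha}}\, dt

% generic fractional-order gradient descent step on the error function E
w_{k+1} = w_{k} - \eta \left. {}^{C}_{\,c}\!D^{\alpha}_{w} E(w) \right|_{w = w_{k}}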