2021
DOI: 10.1002/mma.7550
Application of fractional theory in quantum back propagation neural network

Abstract: In this paper, by applying the theory of fractional calculus to the quantum back propagation (BP) neural network, a quantum BP algorithm based on the Grünwald-Letnikov (G-L) definition of the fractional derivative is proposed. We choose the sigmoid linear superposition function to replace the activation function of the traditional neural network and thus construct a fractional quantum BP neural network structure. Experimental results show that this algorithm improves the convergence speed of the network and reduces the convergence error.
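The abstract names two ingredients: a G-L fractional difference used in place of the integer-order gradient, and an activation built as a linear superposition of sigmoids. The snippet below is a minimal sketch of both under common formulations; the function names, the gradient-history update, and the parameter values (lr, alpha, shifts) are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def gl_coefficients(alpha, n):
    # G-L weights w_k = (-1)^k * C(alpha, k), via the standard recursion
    # w_k = w_{k-1} * (k - 1 - alpha) / k
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_superposition(x, shifts):
    # Linear superposition of shifted sigmoids -- an assumed form of the
    # multi-level activation used in quantum neural network models.
    return np.mean([sigmoid(x - s) for s in shifts], axis=0)

def fractional_step(weights, grad_history, lr=0.85, alpha=0.9):
    # Assumed fractional update: replace the plain gradient with a G-L
    # combination of the recent gradient history (newest gradient gets w_0 = 1).
    w = gl_coefficients(alpha, len(grad_history))
    frac_grad = sum(wk * g for wk, g in zip(w, reversed(grad_history)))
    return weights - lr * frac_grad
```

As a sanity check of the recursion, alpha = 0 gives coefficients (1, 0, 0, ...), so the step collapses to an ordinary gradient-descent update.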

Cited by 9 publications (6 citation statements) | References 27 publications
“…The operation principle of the BP neural network derived above applies not only to the three-layer neural network but also to multilayer neural networks. According to the weight adjustment formula between adjacent layers, the operation rules and adjustment speed of the whole network can be obtained [13].…”
Section: BP Neural Network (mentioning)
confidence: 99%
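For reference, the adjacent-layer adjustment formula this statement invokes is, in the standard BP formulation, the delta rule: the error signal is propagated back through the next layer's weights and scaled by the sigmoid derivative. A minimal sketch (variable names are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adjust_adjacent_layers(W_l, z_l, a_prev, W_next, delta_next, lr=0.1):
    # Error signal for this layer: delta_l = (W_next^T delta_next) * sigma'(z_l),
    # where sigma'(z) = s * (1 - s) for the sigmoid.
    s = sigmoid(z_l)
    delta_l = (W_next.T @ delta_next) * s * (1.0 - s)
    # Weight adjustment between the two adjacent layers: W_l -= lr * delta_l a_prev^T
    W_l -= lr * np.outer(delta_l, a_prev)
    return W_l, delta_l  # delta_l feeds the same rule one layer further back
```

Because the same pairwise rule is applied at every depth, the derivation extends unchanged from three-layer to multilayer networks, as the quoted statement notes.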
“…For this problem, it is chosen by experimental tuning as 0.85 and 0.95 [46,47]. [Table 4: Algorithm to receive packet in CQ-routing]…”
Section: Dual Reinforcement Q-Routing (DRQ) (mentioning)
confidence: 99%
“…In the domain of machine learning and statistics, the learning rate (0.8, 0.1) is used as a tuning parameter for an optimization algorithm to determine the step size at each iteration while approaching the minimum of a loss function. For this problem, it is chosen by experimental tuning as 0.85 and 0.95 [46,47].…”
Section: Proposed Model and Analysis (mentioning)
confidence: 99%
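As context for the quoted tuning values, the learning rate is simply the scalar step size of the iterative update; a one-line sketch (the eta values are the ones quoted above, the rest is generic):

```python
def gradient_step(theta, grad, eta=0.85):
    # eta scales each step toward the loss minimum; larger values (e.g. 0.95)
    # converge faster but risk overshooting the minimum.
    return theta - eta * grad
```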
“…Besides, the researchers in [19] have considered a Caputo and Grünwald-Letnikov fractional-derivative-based method for feedforward neural networks to optimize fractional-order delay optimal control problems. Another reported work in [20] has used the Grünwald-Letnikov derivative as a learning algorithm, applying fractional calculus to reduce the convergence error and improve the convergence speed. The single-point search algorithm reported in [21] provides an efficient global learning machine for determining the optimal search path.…”
Section: Introduction (mentioning)
confidence: 99%