2015
DOI: 10.1155/2015/362150
Model and Algorithm of BP Neural Network Based on Expanded Multichain Quantum Optimization

Abstract: A BP neural network model and training algorithm optimized by an expanded multichain quantum optimization algorithm, which offers high parallelism and speed, are proposed based on an analysis of the current state of BP neural network research and its shortcomings. The aim is to overcome overfitting, the dependence on random initial weights, and the oscillation of fitting and generalization ability under subtle changes of the network parameters. The method effectively optimizes the structure of the neural network and can overcome a…

Cited by 23 publications (12 citation statements)
References: 17 publications
“…Backpropagation is an algorithm used to train a neural network in machine learning. Figure 5 shows the structure of the backpropagation neural network (BP neural network) [26]. In the figure, X1, X2, …, Xn are the input values for the input layer.…”
Section: Neural Network Model
confidence: 99%
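The structure described in this statement (inputs X1, …, Xn feeding a hidden layer and then an output layer) can be illustrated with a minimal forward-pass sketch. This is not the cited paper's implementation; the layer sizes, random weights, and variable names below are assumptions for illustration only.

```python
# Minimal sketch of a BP neural network forward pass (illustrative only;
# layer sizes and weights are assumptions, not taken from the cited paper).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_inputs, n_hidden, n_outputs = 4, 6, 1   # X1..Xn with n = 4 here (assumed)
rng = np.random.default_rng(0)

W1 = rng.normal(size=(n_hidden, n_inputs))   # input -> hidden weights
b1 = np.zeros(n_hidden)                      # hidden-layer thresholds
W2 = rng.normal(size=(n_outputs, n_hidden))  # hidden -> output weights
b2 = np.zeros(n_outputs)                     # output-layer thresholds

x = rng.normal(size=n_inputs)                # input vector (X1, ..., Xn)
hidden = sigmoid(W1 @ x + b1)                # hidden-layer activations
output = sigmoid(W2 @ hidden + b2)           # network output
print(output)
```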
“…Modeling. Suppose that the network has R nodes [33] and the transfer function of each layer is of sigmoid type [34]. The following notations are used throughout this section: α1i denotes the output of the i-th hidden-layer node; α2k denotes the output of the k-th output-layer node; ω1ij and ω2ki represent the weight between node i and node j and the weight between node k and node i, respectively; b1i denotes the threshold for hidden-layer node i; and b2k denotes the threshold for output-layer node k [35].…”
Section: BP Neural Network
confidence: 99%
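The notation in this statement maps onto the conventional forward-pass equations of a single-hidden-layer BP network. The form below is a reconstruction under that assumption; in particular, the sign convention for the thresholds b1i and b2k is not confirmed by the quoted text.

```latex
% Forward-pass equations implied by the quoted notation (conventional form;
% the sign convention for the thresholds is an assumption).
\begin{aligned}
\alpha_{1i} &= f\!\Big(\sum_{j} \omega_{1ij}\, x_j - b_{1i}\Big), \\
\alpha_{2k} &= f\!\Big(\sum_{i} \omega_{2ki}\, \alpha_{1i} - b_{2k}\Big), \\
f(z) &= \frac{1}{1 + e^{-z}} \quad \text{(sigmoid transfer function)}.
\end{aligned}
```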
“…η is the learning rate of the BP neural network. The sigmoid function [55] used as an excitation function in this study is presented as follows:…”
Section: Training for the BPNN
confidence: 99%
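The quoted passage truncates before the formula, so the block below assumes the standard logistic sigmoid and shows how a learning rate η enters a generic gradient-descent weight update; the numeric values are illustrative only and are not taken from the study.

```python
# Generic sketch: logistic sigmoid excitation and a gradient-descent weight
# update with learning rate eta (assumed standard form; values illustrative).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_derivative(z):
    s = sigmoid(z)
    return s * (1.0 - s)            # f'(z) = f(z) * (1 - f(z))

eta = 0.1                           # learning rate (eta), assumed value
w = np.array([0.5, -0.3])           # example weights
x = np.array([1.0, 2.0])            # example input
target = 0.8                        # example target output

z = w @ x
y = sigmoid(z)
grad = (y - target) * sigmoid_derivative(z) * x   # dE/dw for E = 0.5*(y - t)^2
w -= eta * grad                                   # backpropagation weight update
```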
“…A BPNN comprises an input layer, hidden layer, and output layer [55]. The BPNN model built by the study consists of three layers of neurons, including a hidden layer, as shown in Figure 3.…”
Section: Training for the BPNN
confidence: 99%
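As a usage-level sketch of the three-layer topology described here, a plain BPNN of this shape can be instantiated with scikit-learn; this stand-in ignores the paper's quantum-optimized initialization, and the hidden-layer size, hyperparameters, and toy data are assumptions.

```python
# Illustrative three-layer network (input / one hidden layer / output) using
# scikit-learn's MLPRegressor as a stand-in for a plain BPNN.
from sklearn.neural_network import MLPRegressor
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # toy inputs
y = X @ np.array([0.5, -1.0, 2.0])       # toy targets

model = MLPRegressor(hidden_layer_sizes=(10,),   # single hidden layer (assumed size)
                     activation='logistic',      # sigmoid excitation
                     solver='sgd',               # gradient-descent training
                     learning_rate_init=0.01,    # eta
                     max_iter=2000)
model.fit(X, y)
```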