DOI: 10.1007/978-3-540-87732-5_14

MATLAB Simulation and Comparison of Zhang Neural Network and Gradient Neural Network for Time-Varying Lyapunov Equation Solving

Cited by 9 publications (11 citation statements: 0 supporting, 11 mentioning, 0 contrasting) · References 16 publications

“…In addition to Proposition 1, if a linear activation-function array F(·) is used, then the neural-state matrix P(t) of ZNN (4), starting from any initial state P_0 ∈ R^{n×n}, could exponentially converge to the theoretical time-varying solution P*(t) of (1) with rate γ. If an array F(·) made up of n^2 power-sigmoid activation functions (4) is used, then superior global exponential convergence can be achieved for ZNN (4), as compared to the situation of using linear activation functions.…”
Section: Neural-network Solvers
Citation type: mentioning (confidence: 99%)
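
For context, the ZNN in question is built around the matrix-valued error function E(t) = Aᵀ(t)P(t) + P(t)A(t) + C(t) of the time-varying Lyapunov equation, and its dynamics enforce dE/dt = -γF(E(t)). Below is a minimal MATLAB sketch of the two activation arrays being compared; the power-sigmoid form is the one commonly used in the ZNN literature, and the parameter values ξ = 4, p = 3 are illustrative assumptions, not values from this paper.

% Element-wise activation arrays F(.) for the ZNN design formula
% dE/dt = -gamma*F(E).  Parameters xi = 4, p = 3 are illustrative.
f_linear = @(E) E;                       % linear activation: F(E) = E

xi = 4;  p = 3;                          % assumed design parameters
sig  = @(E) ((1 + exp(-xi)) ./ (1 - exp(-xi))) ...
         .* ((1 - exp(-xi*E)) ./ (1 + exp(-xi*E)));
f_ps = @(E) sig(E) .* (abs(E) < 1) ...   % sigmoid branch near zero error
     + (E.^p) .* (abs(E) >= 1);          % power branch for large error

For |e| ≥ 1 the power branch e^p accelerates large-error decay, while the sigmoid branch sharpens convergence near zero error; this is the intuition behind the "superior global exponential convergence" claimed above for the power-sigmoid array.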
“…That is, in this time-varying context, gradient-based neural networks only approximately converge to the theoretical solution P*(t), with appreciable residual errors [3]. To remedy this, a special kind of recurrent neural network could be generalized from [3][4][5] for the real-time solution of this time-varying matrix problem.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
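
The residual behavior contrasted here can be checked numerically. A minimal MATLAB sketch of the ZNN for Aᵀ(t)P(t) + P(t)A(t) + C(t) = 0: vectorizing the implicit design equation AᵀṖ + ṖA = -ȦᵀP - PȦ - Ċ - γF(E) via vec(AᵀṖ + ṖA) = (I⊗Aᵀ + Aᵀ⊗I)vec(Ṗ) lets a standard ODE solver integrate the neural state. The coefficients A(t), C(t), the gain γ = 10, and the linear activation below are illustrative assumptions, not values from the paper.

function znn_lyapunov_demo
% Sketch of a ZNN solving A(t)'*P(t) + P(t)*A(t) + C(t) = 0 in real time.
n = 2;  gamma = 10;  P0 = eye(n);
[t, y] = ode45(@(t,y) znn_rhs(t, y, n, gamma), [0 5], P0(:));
res = arrayfun(@(k) norm(lyap_residual(t(k), reshape(y(k,:), n, n)), 'fro'), ...
               (1:numel(t))');
plot(t, res);  xlabel('t');  ylabel('||E(t)||_F');  % residual should decay
end

function dy = znn_rhs(t, y, n, gamma)
[A, dA, C, dC] = coeffs(t);
P = reshape(y, n, n);
E = A.'*P + P*A + C;                        % ZNN error function
rhs = -dA.'*P - P*dA - dC - gamma*E;        % linear activation F(E) = E
M = kron(eye(n), A.') + kron(A.', eye(n));  % vec(A'*Pd + Pd*A) = M*vec(Pd)
dy = M \ rhs(:);
end

function E = lyap_residual(t, P)
[A, ~, C, ~] = coeffs(t);
E = A.'*P + P*A + C;
end

function [A, dA, C, dC] = coeffs(t)
% Illustrative time-varying coefficients (assumed, not from the paper).
A  = [-2 + 0.5*sin(t),  0.5*cos(t);  -0.5*cos(t),  -2 + 0.5*sin(t)];
dA = [ 0.5*cos(t),     -0.5*sin(t);   0.5*sin(t),   0.5*cos(t)];
C  = (2 + sin(t)) * eye(2);
dC = cos(t) * eye(2);
end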
“…One conventional neural network, termed the gradient-based neural network (GNN) (Zhang, 2005; Zhang, Chen, Ma, & Li, 2007; Zhang & Wang, 2002), which uses the gradient-descent method to minimize a norm-based bounded energy function, has already been employed comprehensively to solve such static problems. However, when such a GNN method is applied to a time-varying (coefficients) case, a convergence rate much faster than the variational rate of the time-varying coefficients is often required in real time (Zhang & Ge, 2005; Zhang, Yue, Chen, & Yi, 2008). This may thus impose stringent restrictions on physical realization and/or sacrifice solution precision.…”
Section: Introduction
Citation type: mentioning (confidence: 94%)
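
For comparison, the GNN described in this statement descends the gradient of the norm-based energy ε(P) = ||AᵀP + PA + C||_F^2 / 2 at each instant, giving Ṗ = -γ(AE + EAᵀ) with E = AᵀP + PA + C; unlike the ZNN it uses no information about Ȧ(t) or Ċ(t), which is why it lags a time-varying solution. A minimal sketch, reusing the illustrative coeffs(t) and lyap_residual helpers from the ZNN sketch above:

function gnn_lyapunov_demo
% Sketch of a GNN applied, pointwise in t, to the same time-varying problem.
n = 2;  gamma = 10;  P0 = eye(n);
[t, y] = ode45(@(t,y) gnn_rhs(t, y, n, gamma), [0 5], P0(:));
res = arrayfun(@(k) norm(lyap_residual(t(k), reshape(y(k,:), n, n)), 'fro'), ...
               (1:numel(t))');
plot(t, res);   % residual settles at a nonzero lag error, unlike the ZNN
end

function dy = gnn_rhs(t, y, n, gamma)
[A, ~, C, ~] = coeffs(t);       % derivative information deliberately unused
P = reshape(y, n, n);
E = A.'*P + P*A + C;
dP = -gamma * (A*E + E*A.');    % gradient of 0.5*||E||_F^2 w.r.t. P
dy = dP(:);
end

Increasing γ shrinks the GNN's steady-state lag but does not remove it, which matches the "stringent restrictions on physical realization" noted in the quotation.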
“…In this method, an optimization algorithm finds the samples that form the class borders; these samples are called support vectors. The training points closest to the decision boundary can be regarded as the subset that defines the decision boundary, i.e., the support vectors [39].…”
Section: Support Vector Machine
Citation type: mentioning (confidence: 99%)
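
To make the quoted description concrete, here is a minimal MATLAB illustration using fitcsvm from the Statistics and Machine Learning Toolbox; the toy data below is invented for illustration, not taken from the citing paper.

% Toy two-class data (illustrative only).
rng(0);
X = [randn(20, 2) - 1.5;  randn(20, 2) + 1.5];
y = [-ones(20, 1);  ones(20, 1)];

% Fit a linear SVM.  The training points flagged by IsSupportVector are
% exactly the samples closest to (or violating) the margin, i.e. the
% support vectors that define the decision boundary.
mdl = fitcsvm(X, y, 'KernelFunction', 'linear');
sv  = X(mdl.IsSupportVector, :);
fprintf('%d of %d training points are support vectors\n', size(sv, 1), size(X, 1));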