2016
DOI: 10.1049/el.2016.0186
Extended Wang neural network for online solving a set of linear equations

Abstract: An extended Wang neural network (EWNN) is proposed for the online solution of a set of linear equations. The EWNN possesses a general nonlinear model form with redundant parts, to accommodate the nonlinearity phenomena that arise in circuit implementations of the Wang neural network (WNN). Furthermore, two types of nonlinear activation are proposed for the EWNN, aiming to improve the convergence of the WNN. Illustrative results verify the proposed EWNN for the online solving of linear equations. Introduction: The problem of finding solution…
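The abstract does not spell out the EWNN dynamics, but the underlying Wang-type recurrent network for solving A x = b is conventionally written as dx/dt = -γ Aᵀ φ(A x - b), where φ is an odd nonlinear activation. The following is a minimal sketch under that assumption, with forward-Euler integration; the power-sum activation `phi` and the gains `gamma`, `dt` are illustrative choices, not values from the paper:

```python
import numpy as np

def phi(e, p=3):
    """Illustrative odd activation: linear term plus an odd power term."""
    return e + np.sign(e) * np.abs(e) ** p

def wnn_solve(A, b, gamma=5.0, dt=1e-3, steps=30000):
    """Forward-Euler integration of dx/dt = -gamma * A.T @ phi(A x - b)."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x - dt * gamma * A.T @ phi(A @ x - b)
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = wnn_solve(A, b)  # converges toward the solution of A x = b
```

At an equilibrium, Aᵀ φ(A x - b) = 0; since φ is odd and A here is invertible, this forces A x = b, so the network's steady state is the solution of the linear system.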

Cited by 10 publications (2 citation statements). References 6 publications.
“…This is due to the ease in implementability of NNs, their parallel processing capabilities, and their overall fast error convergence rates. Specifically, recurrent neural network (RNN) models are based on a vector-valued error-monitoring function [3][4][5], instead of the usually employed norm-based scalar-valued error functions [6][7][8]. A finite-time Zhang neural network (ZNN) for solving systems of linear equations, based on an efficient activation function, was proposed in [9].…”
mentioning
confidence: 99%
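The statement above contrasts vector-valued error-monitoring functions with norm-based scalar-valued ones. A minimal sketch of that distinction, assuming a square invertible A (the function names and gains are illustrative, not from the cited works): the gradient network descends the scalar energy E(x) = ½‖A x - b‖², while the Zhang-type design imposes de/dt = -γ e on the vector error e(x) = A x - b directly.

```python
import numpy as np

def gradient_nn(A, b, gamma=5.0, dt=1e-3, steps=20000):
    """Scalar error: E(x) = 0.5*||A x - b||^2, dynamics dx/dt = -gamma * grad E."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x - dt * gamma * A.T @ (A @ x - b)
    return x

def znn(A, b, gamma=5.0, dt=1e-3, steps=20000):
    """Vector error: e = A x - b, impose de/dt = -gamma*e,
    so A dx/dt = -gamma*e, i.e. dx/dt = -gamma * inv(A) @ e (square A)."""
    x = np.zeros(A.shape[1])
    Ainv = np.linalg.inv(A)
    for _ in range(steps):
        x = x - dt * gamma * Ainv @ (A @ x - b)
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_g = gradient_nn(A, b)
x_z = znn(A, b)  # both converge to the same solution for a static system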
“…A class of Wang neural networks with exponential convergence has been developed to settle many mathematical problems from the perspective of control [14]-[16], but it specializes in static problems. Problems in practice are actually dynamical, and these networks fail to track the theoretical solution of a dynamic problem because they lack velocity compensation for the dynamically varying parameters [17]-[19]. In addition, the gradient neural network, as a conventional recurrent neural network (RNN) based approach, is employed to search for the target roots of dynamic problems [20], [21].…”
Section: Introductionmentioning
confidence: 99%
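The "velocity compensation" mentioned in the statement above refers to feeding the time derivatives of the problem data into the network so it can follow a moving solution. A sketch under simplifying assumptions (constant A, linearly drifting b(t) = b0 + v·t; all names and gains are illustrative): from e = A x - b(t) and the imposed decay de/dt = -γ e, one gets A dx/dt - db/dt = -γ e, i.e. dx/dt = A⁻¹(db/dt - γ e), where the db/dt term is the compensation a plain gradient network lacks.

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b0 = np.array([1.0, 2.0])
v = np.array([0.5, -0.3])          # b(t) = b0 + v*t, so db/dt = v
Ainv = np.linalg.inv(A)

gamma, dt, steps = 10.0, 1e-3, 5000
x = np.linalg.solve(A, b0)          # start on the exact solution at t = 0
for k in range(steps):
    b = b0 + v * (k * dt)
    # Zhang-type dynamics with velocity compensation:
    # dx/dt = inv(A) @ (db/dt - gamma * (A x - b))
    x = x + dt * Ainv @ (v - gamma * (A @ x - b))

t_end = steps * dt
x_star = np.linalg.solve(A, b0 + v * t_end)  # the moving solution at t_end
```

Dropping the `v` term recovers a network without velocity compensation, which trails the moving solution by a steady-state lag proportional to ‖v‖/γ — the tracking failure the quoted statement describes.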