2004
DOI: 10.1016/j.neunet.2004.05.006
A recurrent neural network with exponential convergence for solving convex quadratic program and related linear piecewise equations

Cited by 159 publications (47 citation statements)
References 11 publications
“…Theoretically, the -WTA network (21) as well as network (18) do not need to choose any parameter (the scaling factor can be any positive number), which can be deemed as a great advantage over networks (15) and (16). In addition, the two networks (18) and (21) allow for some equal input signals [see (17)], a situation often encountered in practice but excluded by (15) and (16).…”
Section: Model Comparisons
Confidence: 99%
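The quoted comparison concerns winner-take-all (WTA) selection and notes that some network models tolerate equal input signals while others exclude them. As a minimal illustration of why ties matter, here is a hedged sketch of the k-WTA operation itself (the function name `k_wta` and the tie-breaking convention are illustrative assumptions, not taken from the cited networks (15)–(21)):

```python
import numpy as np

def k_wta(u, k):
    """Return a binary vector marking the k largest entries of u.

    A pure threshold rule fails when inputs tie at the k-th
    boundary; here ties are broken by index order via a stable
    sort, which is one simple convention.
    """
    idx = np.argsort(-u, kind="stable")[:k]  # indices of k largest
    x = np.zeros_like(u)
    x[idx] = 1.0
    return x

# Equal inputs at the selection boundary are handled deterministically:
print(k_wta(np.array([3.0, 1.0, 3.0, 2.0]), 2))  # [1. 0. 1. 0.]
```

A continuous-time WTA network would instead reach such a selection as the equilibrium of a dynamical system; this combinatorial version only shows what the target output looks like, including the tied-input case the quoted passage highlights.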
“…Then, we compare the structural complexities of the four -WTA networks (15), (16), (18), and (21). Because all of them are of complexity, it is necessary to give a more accurate estimation of the number of elements in them.…”
Section: Model Comparisons
Confidence: 99%