2010
DOI: 10.1162/neco_a_00029

A Novel Recurrent Neural Network with Finite-Time Convergence for Linear Programming

Abstract: In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show th…
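To make the gradient-method idea concrete, here is a minimal simulation sketch of a penalty/subgradient gradient flow for a small linear program. This is an illustrative assumption, not the exact network proposed in the letter: the energy function, the penalty weight `sigma`, the Euler step `dt`, and the helper `lp_penalty_flow` are all hypothetical choices.

```python
import numpy as np

# Minimal sketch (assumed model, not the paper's exact network):
# solve   minimize c^T x   s.t.  A x = b,  x >= 0
# via the subgradient flow  dx/dt = -dE(x)  of the exact penalty
#   E(x) = c^T x + sigma * ( ||A x - b||_1 + sum_i max(-x_i, 0) ).
# The discontinuous sign terms are what allow the flow to reach the
# feasible set in finite time once sigma is large enough.

def lp_penalty_flow(c, A, b, x0, sigma=10.0, dt=1e-3, steps=20000):
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        g = c + sigma * (A.T @ np.sign(A @ x - b))  # penalty on A x = b
        g -= sigma * (x < 0)                        # penalty on x >= 0
        x -= dt * g                                 # forward-Euler step
    return x

# Example LP: minimize x1 + 2*x2  s.t.  x1 + x2 = 1,  x >= 0.
# The optimum is x = (1, 0).
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
print(lp_penalty_flow(c, A, b, x0=np.array([5.0, -3.0])))  # ~ [1, 0]
```

With a fixed Euler step, the discontinuous dynamics chatter slightly around the optimum; it is the continuous-time flow, analyzed with a Lyapunov function, that yields exact finite-time results of the kind claimed in the abstract.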

Cited by 44 publications (12 citation statements) · References 27 publications
Citation statements: 0 supporting, 9 mentioning, 0 contrasting · Citing publications: 2012–2020

“…Taking advantage of these features of discontinuous systems, the RNN design has been stated as a sliding-mode control problem [19], and several recurrent neural networks have been proposed using different discontinuous activation functions such as hard-limiting [20]–[22], Heaviside [23], and dead-zone [24], [25]. Further results on networks with these dynamical properties were presented in [26]–[29], where the analysis is based on the theory of differential inclusions and differential equations with discontinuous right-hand sides [30]–[32]. In addition, a class of RNNs with fixed-time convergence has recently been proposed [33], providing convergence in a finite time that does not depend on the network's initial condition [34], [35].…”
Section: Introduction (mentioning)
confidence: 99%
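For intuition about the finite-time versus fixed-time distinction drawn in this statement, a standard scalar example (a textbook illustration assumed here, not taken from [33]–[35]) is:

```latex
% Finite-time convergence: the settling time depends on the initial condition.
\[
\dot{x} = -k\,\operatorname{sign}(x), \quad k > 0
\;\Longrightarrow\;
x(t) = 0 \ \text{ for all } t \ge T(x_0) = \frac{|x_0|}{k}.
\]
% Fixed-time convergence: with V = |x|, integrating
% \dot{V} = -\alpha V^{1/2} - \beta V^{3/2} bounds the settling time
% uniformly over all initial conditions:
\[
\dot{x} = -\alpha |x|^{1/2}\operatorname{sign}(x)
          - \beta |x|^{3/2}\operatorname{sign}(x),
\quad \alpha, \beta > 0
\;\Longrightarrow\;
T(x_0) \le \frac{\pi}{\sqrt{\alpha\beta}} \ \text{ for every } x_0.
\]
```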
“…In this section, we apply the proposed method to solve a nonconvex optimization problem that arises in portfolio selection [25], neural networks [26, 27], and compressed sensing [28]. Some preliminary numerical results are reported to demonstrate the feasibility and advantages of the method.…”
Section: Numerical Experiments (mentioning)
confidence: 99%
“…For example, a deterministic annealing neural network was proposed for solving convex programming problems (Wang, 1994); a Lagrangian network was developed for solving convex optimization problems with linear equality constraints based on the Lagrangian optimality conditions (Xia, 2003); the primal-dual network (Xia, 1996), the dual network (Xia, Feng, & Wang, 2004), and the simplified dual network (Liu & Wang, 2006) were developed for solving convex optimization problems based on the Karush-Kuhn-Tucker optimality conditions; and projection neural networks were developed for constrained optimization problems based on the projection method (Gao, 2004; Hu & Wang, 2007; Liu, Cao, & Chen, 2010; Xia, Leung, & Wang, 2002). In recent years, neurodynamic optimization approaches have been extended to nonconvex and generalized convex optimization problems.…”
Section: Introduction (mentioning)
confidence: 99%
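As a sketch of the projection method referenced in this statement, the following simulates a generic projection neural network of the form dx/dt = -x + P_Ω(x - α∇f(x)). The objective, box constraint, gain `alpha`, and helper names are assumptions for demonstration, not code from the cited works.

```python
import numpy as np

# Generic projection neural network (assumed illustrative form):
#   dx/dt = -x + P_Omega(x - alpha * grad_f(x)),
# whose equilibria solve  minimize f(x)  subject to  x in Omega.

def project_box(x, lo, hi):
    # Projection onto the box Omega = [lo, hi]^n
    return np.clip(x, lo, hi)

def projection_network(grad_f, lo, hi, x0, alpha=0.5, dt=0.01, steps=5000):
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x += dt * (-x + project_box(x - alpha * grad_f(x), lo, hi))
    return x

# Example: minimize 0.5*||x - y||^2 over [0, 1]^2 with y = (2, -1);
# the solution is the projection of y onto the box, i.e. (1, 0).
y = np.array([2.0, -1.0])
x = projection_network(lambda x: x - y, lo=0.0, hi=1.0, x0=np.zeros(2))
print(np.round(x, 3))  # ~ [1., 0.]
```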