1995
DOI: 10.1109/72.363446

Solving linear programming problems with neural networks: a comparative study

Abstract: In this paper we study three different classes of neural network models for solving linear programming problems. We investigate the following characteristics of each model: model complexity, complexity of individual neurons, and accuracy of solutions. Simulation examples are given to illustrate the dynamical behavior of each model.

Cited by 103 publications (23 citation statements)
References 21 publications
“…Unfortunately, although we have given the dynamic output feedback control law (13) for solving the LSSP, it is useless for stabilizing nonlinear programming neural networks since the equilibrium point of nonlinear programming neural networks cannot be known a priori. Therefore, a new output feedback control law which is independent of the equilibrium point must be sought to stabilize nonlinear programming neural networks.…”
Section: Stabilizing Recurrent Neural Network for Nonlinear Programming (mentioning)
confidence: 99%
“…Other gradient-based architectures related to penalty functions include the switched capacitor neural networks proposed in [10], the neural network proposed in [11] that is based on the exact penalty function, and a multitude of network architectures given in [12] for solving constrained optimization problems. In [13], various combinations of the L1, L2, and L∞ penalty functions are used to obtain a class of neural networks that are rigorously analyzed in [14]. The nonlinear programming circuit has been generalized for solving non-smooth optimization problems [15] and applied to quadratic and linear programming problems with strong convergence results [16].…”
Section: Introduction (mentioning)
confidence: 99%
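
To make the penalty-function idea in the excerpt above concrete, here is a minimal sketch of a gradient-flow network for an inequality-form LP, using a quadratic (L2) penalty and explicit Euler integration. The function name, the toy problem, and all parameter values (penalty weight K, step size, iteration count) are illustrative assumptions, not the architecture of any of the cited papers.

```python
import numpy as np

def lp_penalty_network(c, A, b, K=100.0, dt=1e-4, steps=50_000):
    """Gradient flow dx/dt = -grad E(x) for the penalized energy
    E(x) = c.x + (K/2) * (||max(0, A@x - b)||^2 + ||max(0, -x)||^2),
    i.e., minimize c.x subject to A@x <= b and x >= 0."""
    x = np.zeros(len(c))
    for _ in range(steps):
        viol = np.maximum(A @ x - b, 0.0)       # inequality violations
        neg = np.maximum(-x, 0.0)               # nonnegativity violations
        grad = c + K * (A.T @ viol) - K * neg   # gradient of E at x
        x -= dt * grad                          # explicit Euler step
    return x

# Toy problem: minimize -x1 - x2 subject to
#   x1 + 2*x2 <= 4,  3*x1 + x2 <= 6,  x >= 0
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])
print(lp_penalty_network(c, A, b))  # settles near the optimal vertex (1.6, 1.2)
```

Because the penalty weight K is finite, the equilibrium sits slightly outside the feasible region, at distance on the order of 1/K from the true optimum; this accuracy-versus-penalty trade-off is one of the characteristics such models are compared on.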
“…Since then, neural networks have been applied to various optimization problems, including linear programming, nonlinear programming, variational inequalities, and linear and nonlinear complementarity problems; see [6-8, 15, 17, 18, 22, 24, 31-35]. There have been many studies on neural-network approaches to real-world problems in some other fields, such as [26, 27, 36].…”
Section: Introduction (mentioning)
confidence: 99%
“…One promising approach for solving optimization problems in real time is to use neural networks (see, e.g., Tank & Hopfield, 1986; Maa & Shanblatt, 1992; Wang, 1993). In the past two decades, many recurrent neural network models have been developed for solving linear and nonlinear programming problems, demonstrating many computational advantages (see, e.g., Forti & Tesi, 1995; Zak, Upatising, & Hui, 1995; Xia & Wang, 2000; Leung, Chen, Jiao, Gao, & Leung, 2001; Xia, 2004; Xia & Feng, 2005; Gao & Liao, 2006; Xia & Ye, 2008; Liu & Wang, 2008a). In particular, Tank and Hopfield (1986) proposed a recurrent neural network for solving linear programming problems, which opened the avenue of solving optimization problems by using recurrent neural networks.…”
(mentioning)
confidence: 99%
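
For a feel of the recurrent-network formulation this passage traces back to Tank and Hopfield (1986), below is a minimal Euler-integrated sketch in that spirit: a layer of variable neurons driven by the cost, and a layer of constraint neurons that fire only when their constraint is violated and feed back through the transposed weights. The ramp activation, the leak time constant tau, the gain, the function name, and the toy problem are all illustrative assumptions rather than the parameters of the original circuit.

```python
import numpy as np

def hopfield_style_lp(c, A, b, gain=100.0, tau=1e3, dt=1e-4, steps=50_000):
    """Two-layer recurrent dynamics for: minimize c.x s.t. A@x <= b, x >= 0.
    Constraint neurons output g = gain * max(0, A@x - b); variable neurons
    integrate a leak, the cost drive -c, and the feedback -A.T @ g."""
    x = np.zeros(len(c))
    for _ in range(steps):
        g = gain * np.maximum(A @ x - b, 0.0)  # constraint-neuron outputs
        dx = -x / tau - c - A.T @ g            # variable-neuron dynamics
        x = np.maximum(x + dt * dx, 0.0)       # Euler step, clipped to x >= 0
    return x

c = np.array([-1.0, -1.0])
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])
print(hopfield_style_lp(c, A, b))  # settles near (1.6, 1.2), just outside feasibility
```

The leak term and the finite constraint gain mean the network settles near, rather than exactly at, the LP optimum, which is one reason the surveyed models are compared on accuracy of solutions as well as on model and neuron complexity.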