1993
DOI: 10.1002/cta.4490210408
Neural networks for constrained optimization problems

Abstract: This paper is concerned with utilizing neural networks and analog circuits to solve constrained optimization problems. A novel neural network architecture is proposed for solving a class of nonlinear programming problems. The proposed neural network, or more precisely a physically realizable approximation, is then used to solve minimum norm problems subject to linear constraints. Minimum norm problems have many applications in various areas, but we focus on their applications to the control of discrete dynamic…
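As a concrete illustration of the minimum norm problems the abstract refers to, the smallest L2-norm solution of an underdetermined linear system Ax = b has a closed form via the pseudoinverse. The matrix, vector, and variable names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Minimum L2-norm solution of an underdetermined linear system:
#     minimize ||x||_2  subject to  A x = b
# For a full-row-rank A, the closed-form answer is
#     x* = A^T (A A^T)^{-1} b,
# i.e. the pseudoinverse solution. Data here are illustrative only.
A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 3.0]])   # 2 constraints, 3 unknowns
b = np.array([4.0, 5.0])

x_star = A.T @ np.linalg.solve(A @ A.T, b)

# The constraint holds exactly, and x* matches np.linalg.pinv(A) @ b.
assert np.allclose(A @ x_star, b)
assert np.allclose(x_star, np.linalg.pinv(A) @ b)
print(x_star)
```

The neural networks discussed in this report compute such solutions by dynamics that settle at the minimizer rather than by explicit matrix inversion.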

Cited by 23 publications (9 citation statements). References 13 publications (16 reference statements).
“…We chose , and and the corresponding constrained optimization problem becomes (16) subject to (17) where , , . This example refers to a low-pass filter with a small area of pass-ability.…”
Section: Example (mentioning; confidence: 99%)
“…The above discussed approach implicitly utilizes the penalty function method [11], [16], [29], in which a constrained optimization problem is approximated by an unconstrained optimization problem. In [17], the authors used the penalty function method to synthesize a new neural optimization network capable of solving a general class of constrained optimization problems. The proposed architecture can be viewed as a continuous NN model, and in [18] the authors used MATLAB with the SIMULINK software package to model and simulate its behavior.…”
Section: Introduction (mentioning; confidence: 99%)
“…P2: For y_e belonging to a sufficiently small neighborhood of the equilibrium point y_e*, lim_{t→∞} e(t) = 0. Note that the system (8) may be expanded in the Taylor series form ẏ = Ay + Bw + φ(y, w), e = Cy + Dw + ψ(y, w), (11) where φ(y, w) and ψ(y, w) vanish at the equilibrium point together with their first-order derivatives, and A, B, C, D are matrices defined by…”
Section: Definition III-3, Local Simplified Servomechanism Problem (LSSP) (mentioning; confidence: 99%)
“…Both the Tank and Hopfield network [6] and the Chua and Lin nonlinear programming circuit [7]–[9] can be demonstrated to be gradient dynamical systems based on the L2 penalty function. Other gradient-based architectures related to penalty functions include the switched-capacitor neural networks proposed in [10], the neural network proposed in [11] that is based on the exact penalty function, and a multitude of network architectures given in [12] for solving constrained optimization problems. In [13], various combinations of the L1, L2, and L∞ penalty functions are used to obtain a class of neural networks that are rigorously analyzed in [14].…”
Section: Introduction (mentioning; confidence: 99%)
“…Kennedy and Chua [50] extended the results of the Tank and Hopfield method to general NLP problems. Lillo et al. [51] introduced a continuous nonlinear neural network architecture based on the penalty method to solve constrained optimization problems. The idea behind the penalty method is to approximate a constrained optimization problem by an unconstrained one (see [52] for more details).…”
Section: Neural Network Based Methods (mentioning; confidence: 99%)