1993
DOI: 10.1109/72.286888
On solving constrained optimization problems with neural networks: a penalty method approach

Abstract: Deals with the use of neural networks to solve linear and nonlinear programming problems. The dynamics of these networks are analyzed. In particular, the dynamics of the canonical nonlinear programming circuit are analyzed. The circuit is shown to be a gradient system that seeks to minimize an unconstrained energy function that can be viewed as a penalty method approximation of the original problem. Next, the implementations that correspond to the dynamical canonical nonlinear programming circuit are examined.…
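The abstract's central point, that the circuit is a gradient system minimizing a penalty-style unconstrained energy, can be illustrated with a small numerical sketch. The code below is not the paper's circuit; it is a minimal gradient-flow simulation under assumed choices (the toy objective, the single linear inequality constraint, the quadratic penalty with weight K, and the Euler step size are all illustrative):

```python
import numpy as np

# Illustrative constrained problem (not from the paper):
#   minimize   f(x) = (x1 - 2)^2 + (x2 - 2)^2
#   subject to g(x) = x1 + x2 - 2 <= 0
# Penalty-method energy: E(x) = f(x) + (K/2) * max(0, g(x))^2

K = 50.0      # penalty weight (illustrative)
dt = 1e-3     # Euler step for the gradient flow dx/dt = -grad E(x)

def grad_E(x):
    g = x[0] + x[1] - 2.0                    # constraint value
    grad_f = 2.0 * (x - 2.0)                 # gradient of the objective
    grad_pen = K * max(0.0, g) * np.ones(2)  # gradient of the quadratic penalty term
    return grad_f + grad_pen

x = np.array([0.0, 0.0])                     # initial state of the "circuit"
for _ in range(20000):                       # integrate the gradient flow
    x -= dt * grad_E(x)

print(x)  # close to (1, 1), the constrained minimizer, for large K
```

For any finite penalty weight the minimizer of E sits slightly outside the feasible set (here at about ((2+K)/(1+K), (2+K)/(1+K))), which is the sense in which the unconstrained energy is only an approximation of the original constrained problem.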

Cited by 110 publications (28 citation statements); citing publications span 1996–2020. References 14 publications.

Citation statements (ordered by relevance):
“…Also, the output vector can mostly converge to a reasonable set of stable outputs of the CNN processor, in about 99.9% of cases. On the other hand, the convergence speed of the CNN processor is in the few-milliseconds range [15], [18], since the CNN has an architecture in which all neurons share the same structure, which makes the CNN well suited to VLSI implementation.…”
Section: Simulation Results and Discussion (mentioning)
confidence: 99%
“…An energy function at time t which decreases along the trajectories of (17), denoted by ℰ(t), is generally expressed by [13] as (18). At the stable state, the outputs of the neurons arrive at an equilibrium with the minimum energy.…”
Section: A. Preliminaries for Cellular Neural Network (mentioning)
confidence: 99%
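The quoted property, that the energy decreases along the circuit's trajectories, follows in one line once the dynamics are written as a gradient system (the form the abstract attributes to the canonical circuit). Using generic symbols, since equations (17)–(18) of the citing paper are not reproduced here:

```latex
% Assuming the gradient-flow form \dot{x} = -\nabla\mathcal{E}(x),
% the energy is non-increasing along every trajectory:
\frac{d}{dt}\,\mathcal{E}\bigl(x(t)\bigr)
  = \nabla\mathcal{E}\bigl(x(t)\bigr)^{\top}\dot{x}(t)
  = -\bigl\lVert \nabla\mathcal{E}\bigl(x(t)\bigr) \bigr\rVert^{2}
  \le 0,
```

with equality only at points where ∇ℰ vanishes, i.e. at the equilibria that the quote identifies with the minimum-energy stable state.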
“…A neural network model for solving (9) was developed in [15]. Its dynamical equation is described by …; it has two layers and, because of an additional nonlinear term, it is more complex in structure than the proposed neural network model (10).…”
Section: Comparative Analysis (mentioning)
confidence: 99%