1990
DOI: 10.1007/bf00939739

A trajectory-following method for unconstrained optimization

Cited by 19 publications (9 citation statements)
References 3 publications
“…The network is designed to classify the feature space without supervision, through the compactness of each cluster calculated using the Mahalanobis distance measure between the $i$th pixel and the centroid of class $k$, given as in [5] by

$$d_{ik}^2 = (\mathbf{x}_i - \mathbf{c}_k)^T \Sigma_k^{-1} (\mathbf{x}_i - \mathbf{c}_k) \qquad (7)$$

where $\mathbf{x}_i$ is the $n$-dimensional feature vector of the $i$th pixel (here $n = 2$ for two-channel segmentation and $n = 3$ for three-channel segmentation), $\mathbf{c}_k$ is the $n$-dimensional centroid vector of class $k$, and $\Sigma_k$ is the covariance matrix of class $k$. By applying (4) to (5), we get a set of equations for the neural dynamics, (8), where $u_i$ and $v_i$ are the input and output of the $i$th neuron, respectively.…”
Section: Implementation and Results
confidence: 99%
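The compactness measure this excerpt describes is the standard Mahalanobis distance. A minimal sketch follows, assuming NumPy and our own symbol names (the excerpt's original symbols were lost in extraction; the feature data here is synthetic and purely illustrative):

```python
import numpy as np

def mahalanobis_sq(x, centroid, cov):
    """Squared Mahalanobis distance between a pixel's n-dimensional
    feature vector x and a class centroid, using that class's
    covariance matrix -- the cluster-compactness measure above."""
    diff = x - centroid
    return float(diff @ np.linalg.inv(cov) @ diff)

# Two-channel segmentation: each pixel carries a 2-dimensional feature vector.
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 2))      # hypothetical pixel feature vectors
centroid = features.mean(axis=0)          # centroid of one class
cov = np.cov(features, rowvar=False)      # covariance matrix of that class

print(mahalanobis_sq(features[0], centroid, cov))
```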
“…Furthermore, it is possible to find a convenient displacement that ensures the network reaches a local minimum after a specified period of time and remains there thereafter [7], [8]. The appropriate selection of the displacement is something of an art; experimentation (or trial and error) and familiarity with a given class of optimization problems are often required to find the best value.…”
Section: Our Contributions
confidence: 99%
“…The curve of steepest descent is given by $\dot{x}(t) = -\nabla f(x(t))$. The application of reparametrizations to nonlinear programming is discussed in [SchäWar90]. A characteristic of a curve is its curvature, which measures the curve's local deviation from a straight line at any curve point $P$. The curvature of a twice continuously differentiable curve…”
Section: Theorem 2.1
confidence: 99%
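The steepest-descent curve is the solution of the gradient-flow ODE $\dot{x}(t) = -\nabla f(x(t))$, which is the trajectory the cited paper follows. A minimal forward-Euler sketch, with a toy quadratic objective and step size of our own choosing (not the paper's integrator):

```python
import numpy as np

def steepest_descent_curve(grad, x0, h=0.01, steps=2000):
    """Trace the curve of steepest descent x'(t) = -grad f(x(t))
    with forward Euler steps of size h."""
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        x = x - h * grad(x)
        path.append(x.copy())
    return np.array(path)

# Toy objective f(x, y) = x^2 + 10 y^2, gradient (2x, 20y).
grad = lambda p: np.array([2.0 * p[0], 20.0 * p[1]])
path = steepest_descent_curve(grad, x0=[3.0, 1.0])
print("trajectory endpoint, near the minimizer (0, 0):", path[-1])
```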
“…Generalized descent methods (Anderson and Walsh 1986, Schäffler and Warsitz 1990) continue the search trajectory every time a local solution is found. Their problem is that, as more local minima are found, the modified objective function becomes more difficult to minimize.…”
Section: Global Optimization Methods
confidence: 99%
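One common flavor of objective modification is Gaussian "deflation": a bump is added over each minimum already found so a fresh descent is pushed out of explored basins. The sketch below is a generic illustration under our own assumptions (the `deflate` helper, its `height`/`width` parameters, and the double-well objective are hypothetical, not the cited authors' construction); it also shows why the modified objective grows harder to handle as bumps accumulate:

```python
import numpy as np

def deflate(f, minima, height=4.0, width=0.5):
    """Return f plus a Gaussian bump over each located minimum, so
    a fresh descent run is pushed out of the basins explored so far."""
    def f_mod(x):
        bumps = sum(height * np.exp(-np.sum((x - m) ** 2) / (2.0 * width**2))
                    for m in minima)
        return f(x) + bumps
    return f_mod

f = lambda x: (x[0] ** 2 - 1.0) ** 2          # double well, minima at x = +/-1
f_mod = deflate(f, minima=[np.array([1.0])])

# The bump fills in the explored minimum; the other basin stays attractive.
print(f_mod(np.array([1.0])))    # ~4.0   (old minimum lifted by the bump)
print(f_mod(np.array([-1.0])))   # ~0.001 (unexplored minimum still low)
```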
“…There are two approaches. First, trajectory methods modify the differential equations describing the local-descent trajectory so that they can escape from local minima (Anderson and Walsh 1986, Snyman and Fatti 1987, Diener and Schaback 1990, Schäffler and Warsitz 1990, Sturua and Zavriev 1991, Vincent et al. 1992). Their disadvantage is the large number of function evaluations spent in unpromising regions.…”
Section: B. Generalized Descent Methods
confidence: 99%
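A minimal sketch of the ODE-modification idea: replacing the first-order descent ODE with a damped second-order (heavy-ball) system lets the trajectory coast over a shallow barrier that plain gradient flow cannot cross. This is a generic momentum construction under our own assumptions (the toy double-well, the `damping` value, and the initial velocity are illustrative choices, not a specific method from the works cited):

```python
import numpy as np

def heavy_ball_trajectory(grad, x0, v0, damping=0.5, h=0.01, steps=4000):
    """Integrate x'' = -grad f(x) - damping * x' with semi-implicit
    Euler; the velocity term modifies the plain descent ODE so the
    trajectory can roll over shallow barriers."""
    x, v = float(x0), float(v0)
    for _ in range(steps):
        v += h * (-grad(x) - damping * v)
        x += h * v
    return x

# Double well f(x) = (x^2 - 1)^2 + 0.3 x: local minimum near x = +0.96,
# global minimum near x = -1.04, barrier near x = 0.08.
grad = lambda x: 4.0 * x * (x * x - 1.0) + 0.3

# Plain gradient flow from x0 = 0.8 settles in the right (worse) basin;
# the same start with initial velocity crosses into the global one.
print(heavy_ball_trajectory(grad, x0=0.8, v0=-2.0))   # settles near x = -1.04
```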