New Sequential and Parallel Derivative-Free Algorithms for Unconstrained Minimization
2002 | DOI: 10.1137/s1052623400370606

Cited by 57 publications (65 citation statements)
References 28 publications
“…Instead, the models are constructed via least-squares fits [121] or interpolation techniques [72,74,220]. As if this were not confusing enough, the term "derivative-free" appears in the titles of other papers [115,173,174,175] in reference to algorithms that do not explicitly construct a local model of the functions involved.…”
Section: 4
mentioning
confidence: 99%
“…When all the coordinate directions have been explored (inner for loop, i.e. steps 3–12), the algorithm computes (by steps 13–18) the new values ξ_{k+1}, ε_{k+1}, and η_{k+1} for the sufficient reduction, penalty and feasibility violation parameters, respectively. In particular, provided that no discrete variable has been updated and that the tentative steps along discrete coordinates are equal to one, the sufficient reduction parameter is decreased.…”
Section: End For
mentioning
confidence: 99%
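The excerpt above describes only the parameter-update step that follows the exploration of the coordinate directions. A minimal Python sketch of that single rule is given below; the function name, the parameter names (xi, eps, eta), and the shrink factor theta are hypothetical illustrations, not the actual algorithm from the citing paper.

```python
def update_parameters(xi, eps, eta, discrete_updated, discrete_steps, theta=0.5):
    """Update the sufficient-reduction (xi), penalty (eps) and feasibility-violation
    (eta) parameters after all coordinate directions have been explored.

    Only the rule quoted in the excerpt is reproduced: xi is decreased when no
    discrete variable was updated and every tentative step along a discrete
    coordinate equals one. How eps and eta are updated is not specified in the
    excerpt, so they are returned unchanged here.
    """
    if not discrete_updated and all(step == 1 for step in discrete_steps):
        xi = theta * xi  # shrink the sufficient-reduction threshold (theta is an assumption)
    return xi, eps, eta
```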
“…In [20] a linesearch strategy for linearly constrained problems [22] is adopted for the solution of Problem (1). In [9] the derivative free algorithms proposed in [10] are extended to the solution of mixed variable problems with bound constraints only. In [24] a probabilistic method using surrogate models for the optimization of computationally expensive mixed-integer black-box problems is proposed.…”
Section: Introduction
mentioning
confidence: 99%
“…(1) has been frequently used by monotone algorithms: given x_i, d_i ∈ E^n, the algorithm must determine a stepsize λ_i so that the new iterate x_{i+1} gives a sufficient decrease in the function value. Under suitable assumptions (A1–A4 below) a (sub)sequence fulfilling (1) converges to a point x̄ satisfying the first order necessary optimality condition, namely ∇f(x̄) = 0 [14]. Additional conditions, mainly in the choice of {d_i}, are obviously required to ensure a superlinear rate of convergence.…”
Section: Introduction
mentioning
confidence: 99%
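The condition (1) referred to in the excerpt above is not reproduced there. As an illustration only, the sketch below shows a common Armijo-type instance of a sufficient-decrease stepsize rule for a descent direction; the function names, tolerances, and backtracking factor are assumptions, not the cited paper's formulation.

```python
import numpy as np

def armijo_stepsize(f, grad_f, x, d, gamma=1e-4, delta=0.5, lam0=1.0, max_iter=50):
    """Backtrack from lam0 until f(x + lam*d) <= f(x) + gamma*lam*grad_f(x).dot(d).

    d is assumed to be a descent direction, so the directional derivative is negative.
    """
    fx = f(x)
    slope = grad_f(x).dot(d)  # directional derivative along d
    lam = lam0
    for _ in range(max_iter):
        if f(x + lam * d) <= fx + gamma * lam * slope:
            return lam
        lam *= delta
    return lam

# Example (assumed quadratic): minimize f(x) = ||x||^2 along d = -grad f(x).
x0 = np.array([1.0, -2.0])
lam = armijo_stepsize(lambda x: x.dot(x), lambda x: 2 * x, x0, -2 * x0)
```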
“…This paper adapts a sufficient decrease condition that does not require the computation of derivatives [5,15]. Therefore, it can be used in derivative-free optimization and gradient-related algorithms.…”
Section: Introduction
mentioning
confidence: 99%
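The excerpt above refers to a sufficient decrease condition that uses no derivative information. A minimal sketch of one widely used test of this kind is shown below, where acceptance requires f(x + λd) ≤ f(x) − γλ², so only function values are needed; the parameter names, the backtracking factor, and the rejection convention (returning 0.0) are illustrative assumptions rather than the exact condition of the cited papers.

```python
def df_linesearch(f, x, d, gamma=1e-6, delta=0.5, lam0=1.0, max_iter=50):
    """Return a stepsize lam satisfying f(x + lam*d) <= f(x) - gamma*lam**2,
    or 0.0 if backtracking fails within max_iter trials (step rejected).

    The test uses function values only, so no gradient of f is required.
    """
    fx = f(x)
    lam = lam0
    for _ in range(max_iter):
        if f(x + lam * d) <= fx - gamma * lam ** 2:
            return lam
        lam *= delta
    return 0.0
```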