1991
DOI: 10.1137/0728014

On the Global Convergence of Trust Region Algorithms Using Inexact Gradient Information

Abstract: Public reporting burden for the collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and R…

Cited by 95 publications
(35 citation statements)
References 14 publications
“…In the absence of inexactness our global convergence theory is that of [10]. If all iterates are feasible, i.e., if all iterates satisfy C(y_k, u_k) = 0, then our results are related to the convergence analyses in [3,5] for trust-region methods with inexact function and gradient information for unconstrained optimization.…”
Section: Introduction
confidence: 98%
“…In section 2 we will consider the reduced problem min f(y(u), u) obtained from (1.1) by eliminating the variables y. We will briefly discuss the convergence analyses in [3] and [5, §§8.4, 10.6] for trust-region methods with inexact function or gradient information for the reduced problem. This will reveal some useful problem information and it will later motivate our assumptions on the inexactness for problem (1.1).…”
Section: Introduction
confidence: 99%
“…It has been shown, however, that trust region optimizations converge even if inexact model and gradient information is used [7,15,28]. In [33], Yue and Meerbergen relax the stringent first-order accuracy requirements to consider the general setting of an unconstrained trust region optimization algorithm using surrogate models with the following properties:…”
Section: Convergence. Standard Trust Region Convergence Theory Requir…
confidence: 99%
“…These local approximations automatically satisfy first-order consistency conditions (i.e., the approximate model's objective and gradient evaluations are locally exact), which in turn provide guarantees that the resulting optimization solution will satisfy the optimality conditions of the original high-fidelity system. The influence of inexact gradient information is considered in [7,28], and of inexact gradient and function information in [6,8,9]. In [1], the authors consider a trust region framework with more general approximation models of varying fidelity and show how adaptive corrections may be used to achieve the first-order consistency conditions required to achieve a provably convergent formulation for general approximation models.…”
confidence: 99%
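The additive first-order correction described in the excerpt above can be sketched as follows. This is a minimal illustration with made-up low- and high-fidelity models (`f_hi`, `f_lo` and their gradients are invented for the example, not taken from any of the cited papers): shifting the cheap surrogate by the value and gradient mismatch at the trust-region center makes the corrected model first-order consistent there.

```python
import numpy as np

# Illustrative models only: f_hi plays the expensive objective,
# f_lo a cheap surrogate that is wrong in both value and slope.
def f_hi(x):
    return float(np.sum(x**2) + 0.1 * np.sum(np.sin(5 * x)))

def grad_hi(x):
    return 2 * x + 0.5 * np.cos(5 * x)

def f_lo(x):
    return float(np.sum(x**2))

def grad_lo(x):
    return 2 * x

def corrected_model(xk):
    """Additive first-order correction at the trust-region center xk:
    m(x) = f_lo(x) + [f_hi(xk) - f_lo(xk)]
                   + (grad_hi(xk) - grad_lo(xk))^T (x - xk),
    so m and its gradient match f_hi exactly at xk."""
    df = f_hi(xk) - f_lo(xk)
    dg = grad_hi(xk) - grad_lo(xk)
    m = lambda x: f_lo(x) + df + dg @ (x - xk)
    grad_m = lambda x: grad_lo(x) + dg
    return m, grad_m

xk = np.array([0.3, -0.7])
m, grad_m = corrected_model(xk)
print(np.isclose(m(xk), f_hi(xk)), np.allclose(grad_m(xk), grad_hi(xk)))
# prints: True True
```

Away from `xk` the corrected model is still only approximate, which is exactly why the trust-region radius is needed to control how far a step may trust it.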
“…The challenge we address here is twofold, first we must produce a surrogate model that captures local function behavior sufficiently well to prove convergence without requiring a highfidelity gradient estimate, and secondly we must ensure the surrogate captures some global function behavior to speed convergence to a stationary point of the high-fidelity function. A trust region algorithm is convergent provided either the error between the gradient of the function and the gradient of surrogate model is bounded by a constant times the gradient of the function [21] or provided the accuracy of the surrogate can be improved dynamically within the trust region framework [ [22], section 10.6]. Oeuvray [23] showed that a radial basis function interpolation satisfies these criteria, provided the interpolation points satisfy certain conditions.…”
mentioning
confidence: 99%
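The relative gradient-error condition quoted above (gradient error bounded by a constant times the gradient) can be exercised in a minimal trust-region loop. Everything below is an illustrative sketch, not the algorithm of [21] or [22]: the step is a simple Cauchy-type step, and the inexact gradient is a deterministically perturbed exact gradient whose relative error stays below one.

```python
import numpy as np

def trust_region_inexact(f, grad_inexact, x0, delta0=1.0, tol=1e-6, max_iter=500):
    """Minimal trust-region sketch with an inexact gradient (illustrative only).
    grad_inexact returns g assumed to satisfy ||g - grad f(x)|| <= eta * ||g||
    with eta < 1, the kind of relative-error condition under which global
    convergence can still be established."""
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g = grad_inexact(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        step = -delta * g / gnorm      # Cauchy-type step to the trust-region boundary
        pred = delta * gnorm           # predicted decrease of the linear model
        ared = f(x) - f(x + step)      # actual decrease of the true objective
        rho = ared / pred
        if rho > 0.1:                  # sufficient agreement: accept the step
            x = x + step
            if rho > 0.75:             # very good model: expand the radius
                delta *= 2.0
        else:                          # poor model: reject and shrink the radius
            delta *= 0.25
    return x

# Quadratic test problem with a deterministic gradient perturbation:
# g = grad f + 0.3 * R x, where R is a 90-degree rotation, so the relative
# error ||g - grad f|| / ||g|| = 0.3 / sqrt(1.09) < 1 at every x != 0.
f = lambda x: 0.5 * float(x @ x)
grad = lambda x: x + 0.3 * np.array([x[1], -x[0]])

x_star = trust_region_inexact(f, grad, x0=[2.0, -3.0])
print(f(x_star))
```

Despite never seeing the exact gradient, the loop drives the objective toward its minimum at the origin, because the perturbed direction remains a descent direction and the radius update filters out steps where the linear model disagrees with the true function.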