2017
DOI: 10.1007/s10107-017-1137-4

Global convergence rate analysis of unconstrained optimization methods based on probabilistic models

Abstract: We present global convergence rates for a line-search method which is based on random first-order models and directions whose quality is ensured only with certain probability. We show that in terms of the order of the accuracy, the evaluation complexity of such a method is the same as its counterparts that use deterministic accurate models; the use of probabilistic models only increases the complexity by a constant, which depends on the probability of the models being good. We particularize and improve these r…
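As a rough illustration of the setting the abstract describes, the sketch below implements a line search that steps along a direction built from a possibly inaccurate gradient estimate and expands or contracts the step size according to a sufficient-decrease check on the exact objective. It is a minimal sketch under assumed names and constants (probabilistic_line_search, gamma, theta), not the algorithm analyzed in the paper.

```python
import numpy as np

def probabilistic_line_search(f, grad_estimate, x0, alpha0=1.0, gamma=2.0,
                              theta=0.1, max_iter=200):
    """Toy line search driven by random gradient estimates (illustrative only).

    grad_estimate(x) is assumed to return an approximation of the gradient that
    is accurate only with some probability; all names and constants here are
    assumptions for illustration, not taken from the paper.
    """
    x, alpha = np.asarray(x0, dtype=float), alpha0
    for _ in range(max_iter):
        g = grad_estimate(x)          # possibly inaccurate model gradient
        d = -g                        # model-based search direction
        trial = x + alpha * d
        # Sufficient-decrease test on the exactly evaluated objective.
        if f(trial) <= f(x) - theta * alpha * np.dot(g, g):
            x, alpha = trial, gamma * alpha    # successful step: enlarge alpha
        else:
            alpha /= gamma                     # unsuccessful step: shrink alpha
    return x

# Example usage on a quadratic with noisy gradient estimates.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x: 0.5 * float(np.dot(x, x))
    grad_estimate = lambda x: x + 0.05 * rng.standard_normal(x.shape)
    print(probabilistic_line_search(f, grad_estimate, np.ones(5)))
```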

Cited by 121 publications (225 citation statements) | References 19 publications
“…It is based on an inner product test that ensures that the search direction in (1.4) is a descent direction with high probability. In contrast to the test studied in [6–8, 14], which we call the norm test, and which controls both the direction and length of the gradient approximation and promotes search directions that are close to the true gradient, the inner product test places more emphasis on generating descent directions and allows more freedom in their length. The numerical results presented in Section 5 suggest that the inner product test is efficient in practice, but in order to establish a Q-linear convergence rate for strongly convex functions, we must reinforce it with an additional mechanism that prevents search directions from becoming nearly orthogonal to the true gradient ∇F(x_k).…”
Section: Introduction
confidence: 99%
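To help compare the two conditions discussed in the quotation above, the sketch below contrasts a sample-based form of the norm test with a sample-based form of the inner product test, replacing the true gradient by the sample mean as is commonly done in practice. The exact thresholds, function names, and this sampled approximation are assumptions for illustration, not the tests as stated in the cited papers.

```python
import numpy as np

def norm_test(sample_grads, theta=0.9):
    """Sampled norm-test check: the variance of the gradient estimate must be
    small relative to the (estimated) gradient norm, so both direction and
    length of the estimate stay close to the true gradient."""
    g = sample_grads.mean(axis=0)                        # gradient estimate g_k
    var = ((sample_grads - g) ** 2).sum(axis=1).mean()   # per-sample sq. deviation
    return var / len(sample_grads) <= theta ** 2 * np.dot(g, g)

def inner_product_test(sample_grads, theta=0.9):
    """Sampled inner-product-test check: only the variance of the projections of
    per-sample gradients onto the estimate is controlled, so the direction is a
    descent direction with high probability while its length is less constrained."""
    g = sample_grads.mean(axis=0)
    proj = sample_grads @ g                              # per-sample inner products
    var = ((proj - np.dot(g, g)) ** 2).mean()
    return var / len(sample_grads) <= theta ** 2 * np.dot(g, g) ** 2
```

In this sampled form the inner product test only constrains the error component along the estimated direction, which is one way to read the remark that it "allows more freedom in their length" compared with the norm test.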
“…We note here that if p_f = 1 then Assumption 2.4(iii) is not needed and condition p_g > 1/2 is sufficient for the convergence results. This case can be considered as an extension of results in [6]. Before concluding this section, we state a result showing the relationship between the variance assumption on the function values and the probability of inaccurate estimates.…”
Section: Random Gradient and Function Estimates
confidence: 85%
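To make the roles of p_f and p_g concrete, one common way such requirements are formalized in this line of work is to ask that the gradient and function estimates be sufficiently accurate with at least a fixed probability, conditioned on the history of the algorithm; the constants and conditioning below are illustrative assumptions, not quoted from the cited paper.

```latex
% Illustrative formalization; \kappa_g, \epsilon_f and the conditioning on the
% history \mathcal{F}_{k-1} are assumptions for exposition.
\[
  \mathbb{P}\!\left( \|g_k - \nabla F(x_k)\| \le \kappa_g\,\alpha_k\,\|g_k\| \;\middle|\; \mathcal{F}_{k-1} \right) \ge p_g,
  \qquad
  \mathbb{P}\!\left( |f_k - F(x_k)| \le \epsilon_f \;\middle|\; \mathcal{F}_{k-1} \right) \ge p_f .
\]
```

Read this way, the quoted remark says that when the function values are exact (p_f = 1), it is enough that the gradient estimates are accurate more often than not (p_g > 1/2).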
“…Before delving into the convergence statement and proof, we state some lemmas similar to those derived in [2, 6, 7].…”
Section: Useful Results
confidence: 99%
“…Inexact Hessian information is considered in [16, 5, 30, 31], approximate gradient and Hessian evaluations are used in [12, 15, 27, 32], and function, gradient and Hessian values are sampled in [22, 6]. The amount of inexactness allowed is controlled dynamically in [12, 15, 22, 16, 5]. Contributions. The present paper proposes an extension of the unifying framework of [13] for unconstrained or inexpensively-constrained problems that allows inexact evaluations of the objective function and of the required derivatives, in an adaptive way inspired by the trust-region scheme of [17, Section 10.6].…”
confidence: 99%