A regularization algorithm using inexact function values and inexact derivatives is proposed and its evaluation complexity analyzed. This algorithm is applicable to unconstrained problems and to problems with inexpensive constraints (that is, constraints whose evaluation and enforcement have negligible cost) under the assumption that the derivative of highest degree is $\beta$-Hölder continuous. It features a very flexible adaptive mechanism for determining the inexactness which is allowed, at each iteration, when computing objective function values and derivatives. The complexity analysis covers arbitrary optimality orders and arbitrary degrees of available approximate derivatives. It extends results of Cartis, Gould and Toint [Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints, arXiv:1811.01220, 2018] on evaluation complexity to the inexact case: if a $q$-th order minimizer is sought using approximations to the first $p$ derivatives, it is proved that a suitable approximate minimizer within $\epsilon$ is computed by the proposed algorithm in at most $O\!\left(\epsilon^{-\frac{p+\beta}{p-q+\beta}}\right)$ iterations and at most $O\!\left(|\log(\epsilon)|\,\epsilon^{-\frac{p+\beta}{p-q+\beta}}\right)$ approximate evaluations. An algorithmic variant, although more rigid in practice, can be proved to find such an approximate minimizer in $O\!\left(|\log(\epsilon)| + \epsilon^{-\frac{p+\beta}{p-q+\beta}}\right)$ evaluations. While the proposed framework remains so far conceptual for high degrees and orders, it is shown to yield simple and computationally realistic inexact methods when specialized to the unconstrained and bound-constrained first- and second-order cases. The deterministic complexity results are finally extended to the stochastic context, yielding adaptive sample-size rules for subsampling methods typical of machine learning.
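To give a flavour of the last point, the following minimal Python sketch shows how a per-iteration accuracy request $\omega_k$ can be translated into a sample size for estimating a finite-sum objective. The $O(1/\sqrt{n})$ sampling-error proxy, the resulting rule $n_k \sim \omega_k^{-2}$, and all names below are illustrative assumptions, not the paper's precise sample-size rules.

```python
import numpy as np

def subsampled_value(loss_i, N, omega, rng):
    """Estimate the finite-sum objective f = (1/N) * sum_i loss_i(i)
    to a heuristic absolute accuracy omega by subsampling.

    Assumes a sampling-error proxy of order 1/sqrt(n), so the sample
    size grows like omega**-2 as the accuracy request tightens.
    """
    n = min(N, max(1, int(np.ceil(omega ** -2.0))))
    idx = rng.choice(N, size=n, replace=False)
    return float(np.mean([loss_i(i) for i in idx])), n

# Example: as the accuracy request omega_k shrinks over the iterations,
# the sample size n_k grows adaptively.
rng = np.random.default_rng(0)
data = rng.standard_normal(10_000)
for omega in (1.0, 0.1, 0.01):
    val, n = subsampled_value(lambda i: data[i] ** 2, len(data), omega, rng)
    print(f"omega={omega:5.2f}  sample size n={n:5d}  estimate={val:.3f}")
```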
…deriving formal bounds on the number of evaluations of the objective function (and possibly of its derivatives) necessary to obtain approximate optimal solutions within a user-specified accuracy. Until recently, results had focused on methods using first- and second-order derivatives of the objective function, and on convergence guarantees to first- or second-order stationary points [29,23,24,19,11]. Among these contributions, [24,11] analyzed the "regularization method", in which a model of the objective function around a given iterate is constructed by adding a regularization term to the local Taylor expansion; this model is then approximately minimized in an attempt to find a new point with a significantly lower objective function value [21]. Such methods have been shown to possess optimal evaluation complexity [14] for first- and second-order models and minimizers, and have generated considerable interest in the research community. A theoretically significant step was made in [7] for unconstrained problems, where evaluation complexity bounds were obtained for convergence to first-order stationary points of a simplified regularization method using models of arbitrary degree...
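To make the regularized-model construction concrete, here is a minimal Python sketch of one iteration of a second-order ($p=2$) regularization step in the spirit of the methods cited above, using exact evaluations. The inner solver, the acceptance threshold `eta`, and the update factor `gamma` are illustrative choices, not the inexact algorithm analyzed in this paper.

```python
import numpy as np
from scipy.optimize import minimize

def ar2_step(f, grad, hess, x, sigma, eta=0.1, gamma=2.0):
    """One iteration of a simplified second-order (p = 2) adaptive
    regularization scheme: build the model
        m(s) = f(x) + g's + (1/2) s'Hs + (sigma/3) ||s||^3
    around the iterate x, approximately minimize it, and adapt the
    regularization weight sigma from achieved vs. predicted decrease.
    """
    g, H = grad(x), hess(x)

    def model(s):
        return g @ s + 0.5 * s @ (H @ s) + sigma / 3.0 * np.linalg.norm(s) ** 3

    # Approximate model minimization (practical codes use dedicated
    # Krylov or secular-equation solvers for this subproblem).
    s = minimize(model, np.zeros_like(x)).x

    predicted = -model(s)                      # model decrease m(0) - m(s)
    rho = (f(x) - f(x + s)) / max(predicted, 1e-16)

    if rho >= eta:                             # successful iteration
        return x + s, max(sigma / gamma, 1e-8)
    return x, sigma * gamma                    # unsuccessful: re-regularize

# Example on a simple quadratic-plus-quartic objective.
f = lambda x: 0.5 * x @ x + 0.25 * np.sum(x ** 4)
grad = lambda x: x + x ** 3
hess = lambda x: np.eye(len(x)) + np.diag(3 * x ** 2)

x, sigma = np.array([2.0, -1.5]), 1.0
for _ in range(20):
    x, sigma = ar2_step(f, grad, hess, x, sigma)
print(x)  # approaches the minimizer at the origin
```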