2021
DOI: 10.1007/s10107-020-01606-x

Local convergence of tensor methods

Abstract: In this paper, we study the local convergence of high-order Tensor Methods for solving convex optimization problems with a composite objective. We justify local superlinear convergence under the assumption of uniform convexity of the smooth component, which has a Lipschitz-continuous high-order derivative. Convergence both in function value and in the norm of the minimal subgradient is established. Global complexity bounds for the Composite Tensor Method in the convex and uniformly convex cases are also discussed. Lastly, we…
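For orientation, the step iterated by such composite tensor methods can be sketched as follows; the symbols p, H, L_p and the factorial normalization are the common conventions of the tensor-methods literature, not quoted from the paper. For F(x) = f(x) + ψ(x) with convex ψ and f having a Lipschitz-continuous p-th derivative with constant L_p, one step minimizes the regularized p-th order Taylor model of f:

$$
T_H(x) \;\in\; \operatorname*{argmin}_{y}\Big\{ f(x) + \sum_{i=1}^{p} \tfrac{1}{i!}\, D^i f(x)[y-x]^i \;+\; \tfrac{H}{(p+1)!}\,\|y-x\|^{p+1} \;+\; \psi(y) \Big\}.
$$

For H ≥ L_p the minimized model upper-bounds F, and the local superlinear convergence discussed in the abstract concerns the decay of F(T_H(x)) − F* near the minimizer when the smooth component is uniformly convex.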

Cited by 14 publications (11 citation statements)
References 18 publications (21 reference statements)
“…A similar convergence result has been obtained in [6] for a deterministic algorithm derived from the particular surrogate function of Example 3.2 with N = 1 (Taylor expansion with regularization). However, our convergence analysis of SHOM is derived for general surrogates.…”
Section: Local Linear Convergence in Function Values (supporting)
confidence: 75%
“…In particular, for strongly convex functions and p = 2 we recover the local linear rate from [11]. For the deterministic case, i.e. when the batch size is equal to the number of functions in the finite sum, we obtain local superlinear convergence as in [6]. Numerical simulations also confirm the efficiency of our algorithm, i.e.…”
Section: Introduction (supporting)
confidence: 58%
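For concreteness, the p = 2 case mentioned here corresponds to the cubically regularized Newton step; the display below is the standard specialization under the same conventions as above, not a formula quoted from [6] or [11]:

$$
T_H(x) \;\in\; \operatorname*{argmin}_{y}\Big\{ f(x) + \langle \nabla f(x), y-x\rangle + \tfrac{1}{2}\langle \nabla^2 f(x)(y-x), y-x\rangle + \tfrac{H}{6}\,\|y-x\|^{3} + \psi(y) \Big\}.
$$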
“…h′(T) := ∇g(T) + φ′(T) ∈ ∂h(T). In order to work with these objects, we use the following result (see Lemma 2 in [2]).…”
Section: This Inclusion Justifies Notation h′(T) (mentioning)
confidence: 99%
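Read together with the abstract's "norm of the minimal subgradient", the quoted notation can be unpacked as follows; this is a hedged paraphrase based on the snippet (with composite objective h = g + φ and T the point returned by the step), not a verbatim statement of Lemma 2 in [2]. The first-order optimality condition of the step singles out one subgradient φ′(T) ∈ ∂φ(T), so that

$$
h'(T) \;:=\; \nabla g(T) + \varphi'(T) \;\in\; \partial h(T),
\qquad
\min_{s \,\in\, \partial h(T)} \|s\|_{*} \;\le\; \|h'(T)\|_{*},
$$

and convergence in the norm of the minimal subgradient means that the left-hand quantity is driven to zero.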
“…Subsequently, a high-order coordinate descent algorithm was studied in [9], and very recently, the high-order A-NPE framework has been specialized to the strongly convex setting [8], generalizing the discrete-time algorithms in this paper with an improved convergence rate. Beyond the setting of Lipschitz-continuous derivatives, high-order algorithms and their accelerated variants have been adapted to the more general setting of Hölder-continuous derivatives [57, 63–66], and an optimal algorithm is known [105]. Other settings include structured convex non-smooth minimization [48], convex-concave minimax optimization and monotone variational inequalities [49, 97], and structured smooth convex minimization [72, 73, 93, 94].…”
Section: Introduction (mentioning)
confidence: 99%