2018 IEEE Conference on Decision and Control (CDC)
DOI: 10.1109/cdc.2018.8619183
Variance Amplification of Accelerated First-Order Algorithms for Strongly Convex Quadratic Optimization Problems

Abstract: We study the robustness of accelerated first-order algorithms to stochastic uncertainties in gradient evaluation. Specifically, for unconstrained, smooth, strongly convex optimization problems, we examine the mean-square error in the optimization variable when the iterates are perturbed by additive white noise. This type of uncertainty may arise in situations where an approximation of the gradient is sought through measurements of a real system or in a distributed computation over a network. Even though the under…
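To make the setting concrete, here is a minimal simulation sketch of the phenomenon the abstract describes: first-order iterations on a strongly convex quadratic driven by additive white noise in the gradient, with the time-averaged squared error as a proxy for the mean-square error. The problem data (m, L, the diagonal Q, the noise level sigma) and the step-size/momentum tunings are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: steady-state mean-square error of noisy gradient descent
# vs. the heavy-ball method on a strongly convex quadratic f(x) = 0.5 x^T Q x.
import numpy as np

rng = np.random.default_rng(0)
m, L = 1.0, 100.0                      # strong convexity / smoothness constants
Q = np.diag(np.linspace(m, L, 10))     # quadratic with minimizer x* = 0
sigma = 1e-2                           # std of additive white gradient noise

def run_gd(alpha, iters=200_000):
    """Noisy gradient descent; returns the time-averaged squared error."""
    x, acc = np.zeros(10), 0.0
    for _ in range(iters):
        g = Q @ x + sigma * rng.standard_normal(10)   # noisy gradient
        x = x - alpha * g
        acc += x @ x
    return acc / iters

def run_hb(alpha, beta, iters=200_000):
    """Noisy heavy-ball: x_{k+1} = x_k + beta (x_k - x_{k-1}) - alpha g_k."""
    x_prev = x = np.zeros(10)
    acc = 0.0
    for _ in range(iters):
        g = Q @ x + sigma * rng.standard_normal(10)
        x, x_prev = x + beta * (x - x_prev) - alpha * g, x
        acc += x @ x
    return acc / iters

kappa = L / m
beta_hb = ((np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)) ** 2   # standard tuning
print("GD MSE:", run_gd(1.0 / L))
print("HB MSE:", run_hb(4.0 / (np.sqrt(L) + np.sqrt(m)) ** 2, beta_hb))
```

With these standard tunings the heavy-ball method converges faster on noiseless problems, but in this noisy sketch it typically settles at a larger steady-state error, which is the trade-off the paper quantifies.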

Cited by 11 publications (7 citation statements)
References 37 publications (94 reference statements)
“…The fastest method in terms of convergence rates, the Triple Momentum Method, however, has the worst noise attenuation. We note that these results have to be taken with care since they only provide upper bounds; still, they are qualitatively in accordance with the results from [26], where a similar performance channel has been analyzed for quadratic optimization problems.…”
Section: Parameters (supporting)
confidence: 84%
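To see where such noise-attenuation figures come from, consider the scalar building block of the quadratic analysis (a standard calculation, not tied to the quoted paper's exact bounds): a single eigen-mode of noisy gradient descent with step size $\alpha$ and curvature $\lambda$,

\[
x_{k+1} = (1 - \alpha\lambda)\, x_k - \alpha\, w_k, \qquad \mathbb{E}[w_k^2] = \sigma^2,
\]

whose stationary variance $P = \lim_{k\to\infty} \mathbb{E}[x_k^2]$ solves the Lyapunov equation

\[
P = (1 - \alpha\lambda)^2 P + \alpha^2 \sigma^2
\quad\Longrightarrow\quad
P = \frac{\alpha^2 \sigma^2}{1 - (1 - \alpha\lambda)^2}.
\]

Momentum methods replace this scalar recursion with a 2×2 mode matrix whose spectral radius can be pushed lower (a faster rate) while the solution of the corresponding Lyapunov equation grows, which is the rate-versus-robustness tension both the quote and [26] describe.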
“…Theorem 4. Let $L \ge m > 0$ and let $\boldsymbol{\Delta}_\rho(m, L)$ be defined as in (26). Then, for each $\rho \in (0, 1]$, $\Delta_\rho$ satisfies the IQC defined by $\Pi$ for each $\Delta_\rho \in \boldsymbol{\Delta}_\rho(m, L)$ and each…”
Section: Definition 6 (Doubly Hyperdominant Matrix) (mentioning)
confidence: 99%
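The multiplier $\Pi$ and the uncertainty class $\boldsymbol{\Delta}_\rho(m, L)$ in this quote are defined in the citing paper itself. For orientation only, a standard static multiplier for gradients of $m$-strongly convex, $L$-smooth functions (used, e.g., by Lessard et al.) encodes the sector condition, in coordinates shifted so that the minimizer sits at the origin:

\[
\begin{bmatrix} x \\ \nabla f(x) \end{bmatrix}^{\!\top}
\begin{bmatrix} -2mL\, I & (L+m)\, I \\ (L+m)\, I & -2\, I \end{bmatrix}
\begin{bmatrix} x \\ \nabla f(x) \end{bmatrix} \;\ge\; 0,
\]

which is an equivalent restatement of $(\nabla f(x) - m x)^\top (L x - \nabla f(x)) \ge 0$.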
“…These inequalities in conjunction with (16) yield $(\|\psi_t\|/\|\psi_0\|)^2 \le \lambda_{\max}(P)/\lambda_{\min}(P)$. Finally, (15) follows from combining this inequality with $\|A^t\| = \|(A^T)^t\| = \sup_{\psi_0 \ne 0} \|\psi_t\|/\|\psi_0\|$.…”
Section: A. Quadratic Optimization Problems (mentioning)
confidence: 95%
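A plausible reading of the quoted chain, assuming a Lyapunov matrix $P \succ 0$ whose quadratic form is non-increasing along the linear iteration $\psi_{t+1} = A\psi_t$:

\[
\lambda_{\min}(P)\, \|\psi_t\|^2 \;\le\; \psi_t^\top P\, \psi_t \;\le\; \psi_0^\top P\, \psi_0 \;\le\; \lambda_{\max}(P)\, \|\psi_0\|^2,
\]

so $(\|\psi_t\|/\|\psi_0\|)^2 \le \lambda_{\max}(P)/\lambda_{\min}(P)$ for every $\psi_0 \ne 0$, and taking the supremum over $\psi_0$ turns this into a bound on $\|A^t\|$.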
“…Moreover, it was also recently shown in [11] that for the case when (p) = 2p + 1 and $h(x) = 0.5|x|^2$, Runge-Kutta discretization methods applied to (2) can generate discrete-time algorithms that achieve acceleration. While these results have been instrumental in the analysis and design of various optimization algorithms with provable acceleration and convergence properties, the study of the robustness properties of these algorithms has been considered only recently [12], [13], [14], [5], [15]. Indeed, as has been noted in the literature, e.g., [16], [8], dynamics of the form (2) may become unstable under small disturbances or even under forward Euler discretization.…”
Section: Introduction (mentioning)
confidence: 99%
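The forward-Euler remark in this quote is easy to reproduce numerically. The sketch below integrates the Nesterov-type ODE $\ddot{x} + (3/t)\dot{x} + \nabla f(x) = 0$ for the quadratic $f(x) = 0.5\,L\,x^2$; the curvature L, step size h, and horizon are illustrative choices, and the semi-implicit variant is shown only as a contrast, not as any cited paper's method.

```python
# Hypothetical illustration: forward-Euler discretization of the
# Nesterov ODE  x'' + (3/t) x' + grad f(x) = 0  with f(x) = 0.5*L*x^2
# blows up, while a semi-implicit (symplectic-style) update stays bounded.

L, h, T = 50.0, 0.05, 4000          # curvature, step size, number of steps
grad = lambda x: L * x

def forward_euler():
    x, v, t = 1.0, 0.0, 1.0
    for _ in range(T):
        x, v = x + h * v, v + h * (-(3.0 / t) * v - grad(x))
        t += h
        if abs(x) > 1e12:           # stop once divergence is obvious
            return x
    return x

def semi_implicit():
    x, v, t = 1.0, 0.0, 1.0
    for _ in range(T):
        v = v + h * (-(3.0 / t) * v - grad(x))  # update velocity first ...
        x = x + h * v                           # ... then use the new velocity
        t += h
    return x

print("forward Euler :", forward_euler())   # diverges for this step size
print("semi-implicit :", semi_implicit())   # stays bounded and decays
```

The qualitative picture matches the quote: the explicit Euler map amplifies the oscillatory modes of the momentum dynamics once the $3/t$ damping has decayed, while the velocity-first update does not.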