2016
DOI: 10.1007/s10957-016-0999-6
Stochastic Intermediate Gradient Method for Convex Problems with Stochastic Inexact Oracle

Cited by 65 publications (56 citation statements)
References 9 publications
“…The expression in (16) is obtained by adding the upper bounds on k_1 and k_2 in (27) and (29). Now assume that property (ii) holds.…”
Section: Technical Results
confidence: 99%
“…Further to this work, the same authors describe an intermediate gradient method that uses an inexact oracle [26]. That work is extended in [27] to handle the case of composite functions, where a stochastic inexact oracle is also introduced. A method based on inexact dual gradient information is introduced in [28], while [2] considers the minimization of an unconstrained, convex, and composite function, where error is present in the gradient of the smooth term or in the proximity operator for the non-smooth term.…”
Section: Introduction
confidence: 99%
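The inexact-oracle setting described in the excerpt above can be illustrated with a minimal sketch: gradient descent where each gradient query returns the true gradient corrupted by zero-mean noise. The quadratic objective and the additive-Gaussian noise model here are illustrative assumptions, not the oracle construction from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smooth, strongly convex objective: f(x) = 0.5 * x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])

def inexact_gradient(x, sigma=0.01):
    """Stochastic inexact oracle: true gradient plus zero-mean noise."""
    return A @ x - b + sigma * rng.standard_normal(x.shape)

x = np.zeros(2)
L = np.linalg.eigvalsh(A).max()  # Lipschitz constant of the gradient
for _ in range(500):
    x = x - (1.0 / L) * inexact_gradient(x)  # step size 1/L

x_star = np.linalg.solve(A, b)  # exact minimizer, for comparison
print(np.linalg.norm(x - x_star))  # small residual, limited by the noise level
```

With a constant step size the iterates do not converge exactly; they settle into a neighborhood of the minimizer whose radius scales with the oracle noise, which is the behavior the inexact-oracle analyses quantify.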
“…then (21a) represents Nesterov's method (2c). For gradient descent (2a), we can alternatively use ψ_t = z_t = y_t := x_t with the corresponding matrices. In what follows, we demonstrate how property (19) of the nonlinear mapping Δ allows us to obtain upper bounds on J when system (21a) is driven by the white stochastic input w_t with zero mean and identity covariance. For some matrix Π, let X be a positive semidefinite matrix, and let λ be a nonnegative scalar such that system (21a)…”
Section: General Strongly Convex Problems
confidence: 94%
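For contrast with plain gradient descent, a noiseless sketch of Nesterov's accelerated method on the same kind of hypothetical quadratic is below. The momentum coefficient is the standard choice for strongly convex problems, (√L − √μ)/(√L + √μ); neither the objective nor the parameterization is taken from the cited work.

```python
import numpy as np

# Hypothetical quadratic f(x) = 0.5 * x^T A x - b^T x (illustrative only)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: A @ x - b

evals = np.linalg.eigvalsh(A)
mu, L = evals.min(), evals.max()  # strong convexity and smoothness constants
beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))  # momentum

x_prev = x = np.zeros(2)
for _ in range(100):
    y = x + beta * (x - x_prev)             # extrapolation (momentum) step
    x_prev, x = x, y - (1.0 / L) * grad(y)  # gradient step taken at y

x_star = np.linalg.solve(A, b)
print(np.linalg.norm(x - x_star))
```

Setting beta = 0 recovers plain gradient descent, matching the role the quoted passage assigns to the choice ψ_t = z_t = y_t := x_t in the unified iteration (21a).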
“…Summing up inequalities (37)… Similar to the first part of the proof of Lemma 1, we can use LMI (24) and inequality (19) to write…”
Section: Proof of the Bounds
confidence: 99%
“…which is extended in [6] and [8], analyzes the convergence of common gradient-based methods when gradient or gradient-type mappings are computed inexactly.…”
Section: Introduction
confidence: 99%