2015
DOI: 10.1137/14095707x
Convergence Rates and Decoupling in Linear Stochastic Approximation Algorithms

Abstract: Almost sure convergence rates are developed for linear stochastic approximation algorithms, where {A_k}_{k=1}^∞ are symmetric, positive semidefinite random matrices and {b_k}_{k=1}^∞ are random vectors. It is shown that |h_n − A^{-1}b| = o(n^{-γ}) a.s. for γ ∈ [0, χ), positive definite A, and vector b. These assumptions are implied by the Marcinkiewicz strong law of large numbers, which allows the {A_k} and {b_k} to have heavy tails, long-range dependence, or both. Finally, corroborating experimental outcomes and decreasing-gain design considerations are pr…
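The algorithm's recursion is lost in the extracted abstract above; a minimal sketch of the convergence claim, assuming the standard linear SA form h_{k+1} = h_k + (1/k)(b_k − A_k h_k) with noisy symmetric matrices A_k averaging to A and noisy vectors b_k averaging to b (all numbers below are illustrative, not from the paper):

```python
import numpy as np

# Linear stochastic approximation sketch:
#   h_{k+1} = h_k + (1/k) * (b_k - A_k h_k)
# where A_k are symmetric PSD random matrices with mean A and b_k are
# random vectors with mean b; the iterate h_n should approach A^{-1} b.
rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5], [0.5, 1.5]])   # positive definite mean matrix
b = np.array([1.0, -1.0])
target = np.linalg.solve(A, b)           # the limit A^{-1} b

h = np.zeros(2)
for k in range(1, 20001):
    M = rng.normal(0.0, 0.1, size=(2, 2))
    A_k = A + (M + M.T) / 2.0            # symmetrized noise, mean A
    b_k = b + rng.normal(0.0, 0.1, size=2)
    h = h + (b_k - A_k @ h) / k

print(np.linalg.norm(h - target))        # small: h_n is near A^{-1} b
```

With a positive definite A, the error decays almost surely at a polynomial rate, which is the o(n^{-γ}) statement in the abstract.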

Cited by 9 publications
(12 citation statements)
References 32 publications
“…Stochastic Approximation (SA) algorithms solve stochastic optimization problems like the mean-square optimization problem (1.4). Our application is similar to the SA framework of Kouritzin (1996) and Kouritzin & Sadeghi (2015). Suppose {(L j , S j , V j , Z j )} N j=1 are i.i.d.…”
Section: Algorithms and Results
confidence: 99%
“…We choose a reasonable scalar γ. However, a more general step size γ/k^α in place of γ/k (see Kouritzin & Sadeghi 2015 for a discussion), a (positive definite) matrix-valued γ, or a two-step algorithm like that introduced in Polyak & Juditsky (1992) may improve performance further.…”
Section: Algorithms and Results
confidence: 99%
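The step-size choices discussed in that excerpt can be illustrated with a small sketch (not the cited authors' code): a slower gain γ/k^α with α ∈ (1/2, 1), combined with Polyak–Ruppert iterate averaging as the two-step scheme of Polyak & Juditsky. The matrices and constants below are illustrative assumptions:

```python
import numpy as np

# Decreasing-gain SA with step gamma/k**alpha, alpha in (1/2, 1),
# plus Polyak-Ruppert averaging of the iterates (the "two step" idea).
rng = np.random.default_rng(1)
A = np.array([[2.0, 0.5], [0.5, 1.5]])   # illustrative positive definite A
b = np.array([1.0, -1.0])
target = np.linalg.solve(A, b)

gamma, alpha = 1.0, 0.7
h = np.zeros(2)
h_bar = np.zeros(2)                      # running average of iterates
for k in range(1, 20001):
    b_k = b + rng.normal(0.0, 0.1, size=2)
    h = h + gamma / k**alpha * (b_k - A @ h)
    h_bar += (h - h_bar) / k             # Polyak-Ruppert average

print(np.linalg.norm(h_bar - target))    # averaged iterate sits near A^{-1} b
```

The averaged iterate typically attains the optimal asymptotic rate without the careful tuning the raw γ/k gain requires, which is the motivation for the two-step algorithm mentioned above.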
“…We remark that by (20) and (23), x(s) converges to H^{-1}y in mean square. By (25) and (18) we get Eȳ = Hx(0) and E‖ȳ‖₂² < ∞. In the case that ρ_min(P) = 1, it follows that I_{n−r} − D is a Hurwitz matrix.…”
Section: B. Sufficient Convergence Conditions and Convergence Rates
confidence: 95%
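The Hurwitz condition invoked in that excerpt means every eigenvalue of I_{n−r} − D has strictly negative real part. A quick numerical check, using a hypothetical D (not a matrix from the cited paper):

```python
import numpy as np

# A matrix is Hurwitz when all its eigenvalues lie in the open left
# half-plane. Here D is a hypothetical example; we test I - D.
D = np.array([[2.0, 0.3],
              [0.1, 3.0]])
M = np.eye(2) - D                        # the matrix I_{n-r} - D
is_hurwitz = np.max(np.linalg.eigvals(M).real) < 0
print(is_hurwitz)                        # True for this choice of D
```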
“…Later, Tadić relaxed the boundary condition of P(s) and provided some convergence rates based on (3) and the assumption that the real parts of the eigenvalues of P + αI_n are all less than 1, where α is a positive constant [39]. Additionally, there are results on convergence rates under the assumption that {I_n − P(s)}_{s≥0} is a sequence of positive semidefinite matrices and I_n − P is a positive definite matrix [24], [25]. Another thread in the theoretical research on system (2) is its consensus behavior, where {P(s)} and {u(s)} are assumed to be row-stochastic matrices and zero-mean noises, respectively [6], [19], [28].…”
Section: B. Linear SA Algorithms Over Random Network
confidence: 99%
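The consensus setting mentioned at the end of that excerpt can be sketched as x(s+1) = P(s)x(s) with random row-stochastic matrices P(s); the noise term u(s) is omitted in this minimal illustration, and the matrices are generated with positive entries purely as an assumption to keep the chain mixing:

```python
import numpy as np

# Consensus iteration x(s+1) = P(s) x(s), where each P(s) is a random
# row-stochastic matrix (nonnegative rows summing to 1). With strictly
# positive entries, the spread of the states contracts toward consensus.
rng = np.random.default_rng(2)
n = 5
x = rng.normal(0.0, 1.0, size=n)         # initial disagreeing states

for s in range(200):
    W = rng.random((n, n)) + 0.1         # strictly positive weights
    P = W / W.sum(axis=1, keepdims=True) # normalize rows to sum to 1
    x = P @ x

print(x.max() - x.min())                 # spread is tiny: near-consensus
```

Adding zero-mean noise u(s), as in the cited consensus literature, turns exact consensus into consensus in a stochastic sense, which is where the convergence-rate analysis enters.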