2020
DOI: 10.1080/02331888.2020.1764557

On the rates of convergence of parallelized averaged stochastic gradient algorithms

Abstract: The growing interest in high-dimensional and functional data analysis has led, over the last decade, to substantial research and a large body of techniques. Parallelized algorithms, which distribute the data across different machines and process it there, are, for example, a good way to deal with large samples taking values in high-dimensional spaces. We introduce here a parallelized averaged stochastic gradient algorithm, which makes it possible to treat the data efficiently and recursively, and so without t…
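To make the construction in the abstract concrete, here is a minimal sketch of a parallelized averaged stochastic gradient scheme: each worker runs an averaged (Polyak-Ruppert) stochastic gradient recursion on its own block of data, and the local averages are merged with a single final communication. It assumes a least-squares model and hypothetical function names (averaged_sgd, parallel_averaged_sgd); the step-size schedule and assumptions of the actual paper may differ.

```python
import numpy as np

def averaged_sgd(X, y, theta0, c=1.0, alpha=0.66):
    """Averaged SGD on one local block: one pass over (X, y) with
    step gamma_k = c / k**alpha, returning the Polyak-Ruppert average."""
    theta = theta0.copy()
    theta_bar = theta0.copy()
    for k, (x, yk) in enumerate(zip(X, y), start=1):
        gamma = c / k ** alpha
        grad = (x @ theta - yk) * x           # least-squares stochastic gradient
        theta = theta - gamma * grad
        theta_bar += (theta - theta_bar) / k  # recursive running average
    return theta_bar

def parallel_averaged_sgd(X, y, n_workers=4):
    """Split the sample into n_workers blocks, run averaged SGD on each,
    and communicate once at the end to average the local estimates."""
    d = X.shape[1]
    blocks = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    locals_ = [averaged_sgd(Xb, yb, np.zeros(d)) for Xb, yb in blocks]
    return np.mean(locals_, axis=0)

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
theta_star = np.arange(1.0, 6.0)
y = X @ theta_star + rng.normal(size=10_000)
print(parallel_averaged_sgd(X, y))
```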

Cited by 5 publications (7 citation statements) · References 39 publications (64 reference statements)

Citation statements, ordered by relevance:
“…Indeed, if m ≳ √N the minimum in the exponent in (8) is achieved on the first term, and no improvement in the bound is guaranteed. Moreover, a similar limit m ≃ √N can be obtained via the SA-based method from [17] with the guarantee (6).…”
Section: Introduction
Confidence: 79%
“…But it is not known whether it is accurate or not, since no method is known to attain it. In [17] the authors obtain a better convergence rate by considering non-accelerated parallelized SGD with a specific step size and only one communication at the end, assuming stronger local smoothness of the objective near the solution…”
Section: SOTA Approaches for Distributed Stochastic Optimization
Confidence: 99%
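In generic notation (chosen here for illustration, not quoted from either paper), the one-communication scheme described above can be summarised as follows: machine j keeps a recursive Polyak-Ruppert average of its own iterates, and the m local averages are combined only once at the end.

```latex
% Generic notation, assumed for illustration: theta^{(j)}_n is the n-th iterate
% of machine j, bar-theta^{(j)}_n its running average, N the total sample size.
\[
  \bar{\theta}^{(j)}_{n+1}
    = \bar{\theta}^{(j)}_{n} + \frac{1}{n+1}\left(\theta^{(j)}_{n+1} - \bar{\theta}^{(j)}_{n}\right),
  \qquad
  \widehat{\theta}_{N} = \frac{1}{m}\sum_{j=1}^{m} \bar{\theta}^{(j)}_{N/m}.
\]
```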
“…k and which can thus be recursively inverted with the help of the Riccati/Sherman-Morrison formula (see Bercu et al. (2020); Boyer and Godichon-Baggioni (2020); Godichon-Baggioni et al. (2022)). In order to verify (H1b), one can consider the following version of the estimate of the Hessian…”
Section: A First Convergence Result
Confidence: 99%
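The recursive inversion mentioned in the statement above can be illustrated with the Sherman-Morrison identity: when a running Hessian estimate grows by a rank-one term, its inverse can be updated in O(d^2) instead of being refactorised. The sketch below is a generic illustration under that assumption; the exact Hessian estimator and weighting used in the cited papers may differ.

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Return (A + u v^T)^{-1} given A^{-1}, via the Sherman-Morrison identity."""
    Au = A_inv @ u
    vA = v @ A_inv
    denom = 1.0 + v @ Au
    return A_inv - np.outer(Au, vA) / denom

# Running Hessian estimate H_k = H_{k-1} + phi_k phi_k^T, maintained through
# its inverse only; each update costs O(d^2) instead of O(d^3).
d = 5
H_inv = np.eye(d)                      # inverse of the initial estimate H_0 = I
rng = np.random.default_rng(1)
for _ in range(100):
    phi = rng.normal(size=d)           # rank-one term entering the estimate
    H_inv = sherman_morrison_update(H_inv, phi, phi)
```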
“…k where e_k is the k-th (modulo d) canonical vector (see Bercu et al. (2021); Godichon-Baggioni et al. (2022)). We can now obtain a first rate of convergence of the estimates.…”
Section: A First Convergence Result
Confidence: 99%
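As a hedged illustration of the canonical-vector device in the last statement, the snippet below (building on sherman_morrison_update from the previous sketch) adds, at step k, a term c_k e_k e_k^T, with e_k the k-th canonical vector taken modulo d, via a second Sherman-Morrison step. The weight c_k and the overall estimator are assumptions for illustration only, not the exact construction of Bercu et al. (2021) or Godichon-Baggioni et al. (2022).

```python
import numpy as np

def regularised_hessian_inverse_update(H_inv, phi, k, d, c=1.0, beta=0.75):
    """Two rank-one updates of the inverse Hessian estimate at step k:
    the data term phi phi^T and an assumed regularisation c_k e_k e_k^T,
    with e_k the k-th (modulo d) canonical vector.
    Relies on sherman_morrison_update defined in the previous sketch."""
    c_k = c / (k + 1) ** beta                        # assumed vanishing weight
    e_k = np.zeros(d)
    e_k[k % d] = 1.0                                 # k-th canonical vector, modulo d
    H_inv = sherman_morrison_update(H_inv, phi, phi) # data-driven rank-one term
    u = np.sqrt(c_k) * e_k
    H_inv = sherman_morrison_update(H_inv, u, u)     # adds c_k e_k e_k^T
    return H_inv
```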