2022
DOI: 10.48550/arxiv.2201.01652
Preprint

Stochastic regularized majorization-minimization with weakly convex and multi-convex surrogates

Abstract: Stochastic majorization-minimization (SMM) is an online extension of the classical principle of majorization-minimization, which consists of sampling i.i.d. data points from a fixed data distribution and minimizing a recursively defined majorizing surrogate of an objective function. In this paper, we introduce stochastic block majorization-minimization, where the surrogates can now be only block multi-convex and a single block is optimized at a time within a diminishing radius. Relaxing the standard strong con…
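To make the recursion concrete, below is a minimal Python sketch of a stochastic block majorization-minimization loop of the kind the abstract describes: at each step an i.i.d. sample is drawn, a quadratic prox-type majorizer of the per-sample loss is averaged into the running surrogate, and a single randomly chosen coordinate block is updated within a diminishing trust-region radius. The surrogate form, the toy loss, the weights w_n = 1/n, and the radius r_n = n^(-1/2) are illustrative assumptions for this sketch, not the exact choices made in the paper.

import numpy as np

# Minimal sketch of stochastic block majorization-minimization (illustrative only).
# Assumptions not taken from the paper: quadratic prox-type per-sample majorizers,
# a toy smooth nonconvex per-sample loss, weights w_n = 1/n, radius r_n = 1/sqrt(n).

rng = np.random.default_rng(0)
DIM, N_ITERS, L_SMOOTH = 10, 500, 5.0   # L_SMOOTH: assumed smoothness constant

def loss_grad(theta, x):
    # gradient of a toy per-sample loss f(theta; x) = 0.5*||theta - x||^2 + 0.1*sum(sin(theta))
    return (theta - x) + 0.1 * np.cos(theta)

theta = np.zeros(DIM)
avg_grad = np.zeros(DIM)     # running average of per-sample gradients at past iterates
avg_anchor = np.zeros(DIM)   # running average of past iterates (surrogate anchor)

for n in range(1, N_ITERS + 1):
    x_n = rng.normal(size=DIM)     # i.i.d. data sample from a fixed distribution
    w_n = 1.0 / n                  # surrogate averaging weight
    # recursively averaged quadratic surrogate:
    #   g_n(t) = (1 - w_n) g_{n-1}(t) + w_n [ f(theta; x_n) + <grad, t - theta> + (L/2) ||t - theta||^2 ]
    avg_grad = (1 - w_n) * avg_grad + w_n * loss_grad(theta, x_n)
    avg_anchor = (1 - w_n) * avg_anchor + w_n * theta
    theta_star = avg_anchor - avg_grad / L_SMOOTH   # unconstrained minimizer of g_n
    # update a single randomly chosen block, restricted to a diminishing radius r_n
    r_n = 1.0 / np.sqrt(n)
    block = rng.integers(DIM)
    theta[block] += np.clip(theta_star[block] - theta[block], -r_n, r_n)

print("final iterate norm:", np.linalg.norm(theta))

In this toy version the diminishing radius keeps consecutive iterates close, which is roughly the role the trust-region constraint plays in the paper's scheme when the surrogate is only block multi-convex rather than strongly convex.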

Cited by 1 publication (16 citation statements)
References 15 publications
“…8.1. This improves the rate of convergence of stochastic algorithms for constrained nonconvex expected loss minimization with dependent data [Lyu22], see Thm. 8.1 for the details.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Stochastic (sub)gradient descent (SGD) is also recently considered in [WPT+21] for convex problems. In the constrained nonconvex case, the work [LNB20] showed asymptotic guarantees of stochastic majorization-minimization (SMM)-type algorithms to stationary points of the expected loss function, and the recent work [Lyu22] showed nonasymptotic guarantees. More recently, [Lyu22] studied a generalized SMM-type algorithm and showed the complexity Õ(ε⁻⁸) in the general case and Õ(ε⁻⁴) when all the iterates of the algorithm lie in the interior of the constraint set, for making the stationarity gap (see LHS of (3)) for the expected loss function less than ε.…”
Section: Related Work (mentioning)
Confidence: 99%
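For context on the ε-stationarity criterion mentioned in this statement, a commonly used gap measure for constrained nonconvex problems (a standard notion in this literature, not necessarily the exact quantity in equation (3) of the citing work) is

\[
\mathrm{gap}(\theta) \;=\; \sup_{\theta' \in \Theta,\ \|\theta' - \theta\| \le 1} \big\langle -\nabla \bar{f}(\theta),\, \theta' - \theta \big\rangle ,
\]

where \bar{f} is the expected loss and \Theta the constraint set. Under this reading, gap(θ) ≤ ε means no feasible direction within unit radius decreases the expected loss at a rate better than ε, and the quoted Õ(ε⁻⁸) and Õ(ε⁻⁴) bounds count the iterations needed to drive such a gap below ε.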