2020
DOI: 10.48550/arxiv.2001.01329
Preprint

Stochastic Proximal Gradient Methods for Nonconvex Problems in Hilbert Spaces

Caroline Geiersbach,
Teresa Scarinci

Abstract: For finite-dimensional problems, stochastic approximation methods have long been used to solve stochastic optimization problems. Their application to infinite-dimensional problems is less understood, particularly for nonconvex objectives. This paper presents convergence results for the stochastic proximal gradient method applied to Hilbert spaces, motivated by optimization problems with partial differential equation (PDE) constraints with random inputs and coefficients. We study stochastic algorithms for nonconvex…
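For orientation, here is a minimal sketch of what one stochastic proximal gradient step looks like for a composite objective on a Hilbert space $H$; the notation ($u_n$, step size $\delta_n$, sampled gradient $G(u_n,\xi_n)$ of the smooth term, nonsmooth term $h$) is illustrative and not taken from the paper.

```latex
\documentclass{article}
\usepackage{amsmath}
\DeclareMathOperator*{\argmin}{arg\,min}
\DeclareMathOperator{\prox}{prox}
\begin{document}
% Hedged sketch: one stochastic proximal gradient step in a Hilbert space H.
% u_n is the current iterate, delta_n > 0 a step size, xi_n a random sample,
% G(u_n, xi_n) a stochastic gradient of the smooth term, h the nonsmooth part.
\[
  u_{n+1} = \prox_{\delta_n h}\!\bigl(u_n - \delta_n\, G(u_n,\xi_n)\bigr),
  \qquad
  \prox_{\delta h}(v) = \argmin_{w\in H}\Bigl\{\, h(w) + \tfrac{1}{2\delta}\,\|w-v\|_H^2 \Bigr\}.
\]
\end{document}
```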


Cited by 2 publications (2 citation statements)
References 37 publications
“…It is critical to incorporate this uncertainty in the optimization problem to make the optimal solution more reliable and robust. Optimization under uncertainty has become an important research area and has received increasing attention in recent years [74,10,45,41,70,76,49,26,78,19,52,63,20,51,48,8,3,5,84,85,42,69,53,50,55,33,59,47,79,80,81,27,18,86,82,35,54,61,6,38,39,31,37,36]. To account for the uncertainty in the optimization problem, different statistical measures of the objective function have been studied, e.g., mean, variance, conditional value-at-risk, worst-case scenario, etc. [70,48,85,3,…”
Section: Introduction (mentioning)
confidence: 99%
“…Indeed, it seems that most of the (convergence) results in this direction so far are based on Banach's fixed point theorem for contracting mappings, and apply only to maps which have some type of monotonicity. For a systematic development in GD, one can see for example the very recent work [6], where the problem is of stochastic nature, the space is a Hilbert space, and $f$ is either in $C^{1,1}_L$ (in which case only the result $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$ is proven); or, under the additional assumptions on the learning rates $\sum_n \delta_n = \infty$ and $\sum_n \delta_n^2 < \infty$, convergence to $0$ of $\{\nabla f(x_n)\}$ can be proven. Some other references are [4] (where the function is assumed to be strongly convex) and [1] (where the VMPT method is used), where again only $\lim_{n\to\infty}\|\nabla f(z_n)\|=0$ is considered, and no discussion of (weak or strong) convergence of $\{x_n\}$ itself or avoidance of saddle points is given.…”
mentioning
confidence: 99%
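To make the quoted step-size condition concrete, below is a minimal finite-dimensional sketch of a stochastic proximal gradient iteration on a toy problem (a randomized least-squares objective with an $\ell_1$ penalty); the problem, constants, and helper names are illustrative assumptions, not the paper's PDE-constrained Hilbert-space setting. The step sizes $\delta_n = 0.5/(n+10)$ satisfy $\sum_n \delta_n = \infty$ and $\sum_n \delta_n^2 < \infty$.

```python
import numpy as np

# Minimal sketch of a stochastic proximal gradient iteration (illustrative toy
# problem, not the paper's PDE-constrained setting). Objective:
#   E_xi[ 0.5 * ||A(xi) x - b(xi)||^2 ] + lam * ||x||_1,
# where each sample xi yields a fresh random data pair (A, b).

rng = np.random.default_rng(0)
d, lam, n_iter = 20, 0.1, 5000

x_true = np.zeros(d)
x_true[:3] = [1.0, -2.0, 0.5]              # sparse "ground truth" used to generate data

def sample():
    """Draw one random data pair (A, b): a realization of the uncertainty xi."""
    A = rng.standard_normal((5, d))
    b = A @ x_true + 0.01 * rng.standard_normal(5)
    return A, b

def prox_l1(v, t):
    """Proximity operator of t*||.||_1, i.e. componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(d)
for n in range(n_iter):
    A, b = sample()
    grad = A.T @ (A @ x - b)               # stochastic gradient of the smooth part
    delta = 0.5 / (n + 10)                 # sum(delta) = inf, sum(delta^2) < inf
    x = prox_l1(x - delta * grad, delta * lam)

print("distance to ground truth:", np.linalg.norm(x - x_true))
```

The soft-thresholding function is the proximity operator of the $\ell_1$ term; swapping in a different nonsmooth regularizer only changes that one helper, while the gradient step and the diminishing step-size schedule stay the same.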