2021
DOI: 10.1007/s10589-020-00259-y
Stochastic proximal gradient methods for nonconvex problems in Hilbert spaces

Abstract: For finite-dimensional problems, stochastic approximation methods have long been used to solve stochastic optimization problems. Their application to infinite-dimensional problems is less understood, particularly for nonconvex objectives. This paper presents convergence results for the stochastic proximal gradient method applied to Hilbert spaces, motivated by optimization problems with partial differential equation (PDE) constraints with random inputs and coefficients. We study stochastic algorithms for nonco…
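For orientation, below is a minimal finite-dimensional sketch of the generic stochastic proximal gradient iteration x_{k+1} = prox_{t_k g}(x_k - t_k G(x_k, xi_k)) that the abstract refers to, applied to a toy l1-regularized least-squares problem. The problem, synthetic data, and step-size rule are illustrative assumptions; this is not the paper's Hilbert-space setting or its PDE-constrained examples.

```python
# Sketch of stochastic proximal gradient on a hypothetical toy problem:
#   min_x  E_i[ 0.5 * (A_i x - b_i)^2 ] + lam * ||x||_1
# The smooth term is sampled one row at a time; the nonsmooth l1 term
# is handled by its proximal operator (soft thresholding).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: sparse ground truth, noisy observations.
n_samples, dim = 200, 20
A = rng.standard_normal((n_samples, dim))
x_true = np.where(rng.random(dim) < 0.3, rng.standard_normal(dim), 0.0)
b = A @ x_true + 0.1 * rng.standard_normal(n_samples)
lam = 0.1

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(dim)
for k in range(5000):
    i = rng.integers(n_samples)              # sample one data point xi_k
    grad = A[i] * (A[i] @ x - b[i])          # stochastic gradient of the smooth term
    t_k = 1.0 / (dim + k)                    # diminishing (Robbins-Monro-type) step size
    x = soft_threshold(x - t_k * grad, t_k * lam)  # proximal step on the l1 term

print("recovered support size:", np.count_nonzero(np.abs(x) > 1e-3))
```

The diminishing step-size choice here is one standard option satisfying the usual summability conditions for stochastic approximation; the paper's analysis concerns the analogous iteration for nonconvex objectives in Hilbert spaces.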

Cited by 22 publications (22 citation statements)
References 54 publications (84 reference statements)
“…There are results for projected gradient methods, see, e.g., [14,17]. Recently, a stochastic version of the algorithm was analyzed in [16]. However, in these papers no convergence results for weakly converging subsequences of iterates are given.…”
Section: Introduction (supporting)
confidence: 70%
“…In this paper, we pursued our investigation of [7] about stochastic optimization problems in Hilbert space with focus on asymptotic convergence for stochastic gradient methods. In contrast to the first paper, which considered nonconvex non-smooth optimization problems, we analyze here nonconvex, but smooth, stochastic optimization problems.…”
Section: Discussion (mentioning)
confidence: 99%
“…In contrast to the first paper, which considered nonconvex non-smooth optimization problems, we analyze here nonconvex, but smooth, stochastic optimization problems. Unlike [7], where a control constraint was present and the proposed algorithm included a proximal step, the convergence analysis here is considerably simpler. The semilinear elliptic PDE problem is subject to Neumann boundary conditions and is slightly more general than in [7].…”
Section: Discussion (mentioning)
confidence: 99%