2016
DOI: 10.1090/mcom/3178
Penalty methods with stochastic approximation for stochastic nonlinear programming

Abstract: In this paper, we propose a class of penalty methods with stochastic approximation for solving stochastic nonlinear programming problems. We assume that only noisy gradients or function values of the objective function are available via calls to a stochastic first-order or zeroth-order oracle. In each iteration of the proposed methods, we minimize an exact penalty function which is nonsmooth and nonconvex with only stochastic first-order or zeroth-order information available. Stochastic approximation algorithm…
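As a rough illustration of the setting the abstract describes, here is a minimal sketch of stochastic approximation applied to an l1 exact penalty function; the oracle interface, the deterministic constraints, the penalty parameter, and the step-size rule are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def penalty_sa(grad_f_oracle, c, grad_c, x0, rho=10.0, steps=2000, gamma0=0.1):
    # Stochastic subgradient iteration on the exact l1 penalty
    #   Phi_rho(x) = f(x) + rho * sum_i max(0, c_i(x)),
    # for constraints c_i(x) <= 0 with exact (deterministic) gradients.
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        g = grad_f_oracle(x)                  # noisy gradient of f (one SFO call)
        violated = (c(x) > 0).astype(float)   # which penalty terms are active
        g = g + rho * grad_c(x).T @ violated  # subgradient of the penalty part
        x = x - gamma0 / np.sqrt(k + 1) * g   # diminishing step size
    return x

# Toy usage: minimize E[(x - xi)^2], xi ~ N(1.5, 1), subject to x >= 2;
# the constraint is active and the iterates settle near x = 2.
rng = np.random.default_rng(0)
grad_f = lambda x: 2.0 * (x - (1.5 + rng.standard_normal()))
c = lambda x: np.array([2.0 - x[0]])          # 2 - x <= 0  <=>  x >= 2
grad_c = lambda x: np.array([[-1.0]])
print(penalty_sa(grad_f, c, grad_c, x0=[0.0]))
```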

Cited by 16 publications (9 citation statements); references 38 publications (59 reference statements). Citing publications span 2017-2024.

Citation statements (ordered by relevance):
“…In [9], a stochastic block mirror descent method, which incorporates the block-coordinate decomposition scheme into the stochastic mirror-descent methodology, was proposed for the nonconvex stochastic optimization problem x* = argmin{f(x) : x ∈ X} with X having a block structure. More recently, Wang et al. [44] proposed a penalty method for nonconvex stochastic optimization problems with nonlinear constraints, and also analyzed its SFO-call complexity.…”
(mentioning; confidence: 99%)
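For context on the block-coordinate scheme this excerpt mentions, here is a minimal Euclidean sketch of a stochastic block mirror descent step, assuming hypothetical partial_grad_oracle and proj_block interfaces; with a non-Euclidean distance-generating function, the projection would become a mirror/prox step instead.

```python
import numpy as np

def block_mirror_descent(partial_grad_oracle, proj_block, x0, blocks,
                         steps=2000, gamma=0.05, seed=1):
    # Euclidean instance: at each step, sample one coordinate block,
    # take a noisy partial-gradient step on it, and project it back
    # onto its block of the feasible set X = X_1 x ... x X_b.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        i = rng.integers(len(blocks))      # uniform block sampling
        idx = blocks[i]
        g_i = partial_grad_oracle(x, idx)  # noisy gradient of f w.r.t. block i
        x[idx] = proj_block(i, x[idx] - gamma * g_i)
    return x
```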
“…Optimization with functional constraints. There have recently appeared a number of papers that develop first-order methods for minimizing smooth (stochastic) objectives over sets cut out by smooth deterministic inequalities [13, 26, 44]. The convergence guarantees in these works are all stated in terms of a KKT-residual measure, and are therefore not directly comparable to the ones we obtain here.…”
(Section: Related Work; mentioning; confidence: 85%)
“…On the other hand, given that many problems yield only input-output information, an interesting approach within the SA and SAA frameworks is based on zeroth-order information [17]. Constrained problems are of continuing interest, and some recent research on penalty methods within the SA methodology is presented in [39]. Projection and filter methods with variable sample size might be a valuable topic of future research.…”
(Section: Discussion; mentioning; confidence: 99%)
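The zeroth-order setting referred to in this excerpt, and in the abstract above, replaces gradient calls with function-value queries. Below is a minimal sketch of a standard two-point Gaussian-smoothing estimator, assuming a hypothetical noisy function-value oracle f_noisy; it is not taken from the paper.

```python
import numpy as np

def zo_gradient(f_noisy, x, mu=1e-3, rng=None):
    # Two-point Gaussian-smoothing estimator:
    #   g = (f(x + mu*u) - f(x)) / mu * u,  u ~ N(0, I),
    # an unbiased estimate of the gradient of the smoothed function f_mu.
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(np.shape(x))
    return (f_noisy(x + mu * u) - f_noisy(x)) / mu * u
```

Such an estimate can be substituted for the stochastic gradient in any SA loop, trading one stochastic first-order (SFO) call for two zeroth-order (SZO) calls per iteration.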