1999
DOI: 10.1137/s1052623497326095

Decreasing Functions with Applications to Penalization

Cited by 81 publications (44 citation statements)
References 10 publications
“…tolerance = 10^-6 is chosen for the Newton iteration. To conclude this section, we comment that from Tables 2, 3 and 4, it is clear that the l_2 penalty method needs a much smaller penalty parameter than those used in the l_{1/2} and l_1 penalty methods in order to achieve accuracy comparable to that of the latter two methods. This confirms the theoretical results in the previous section as well as those in [8]. Furthermore, the computed convergence rates for the l_2 penalty method are respectively one and two orders higher than those of the l_1 and l_{1/2} penalty methods, as predicted by the theoretical result in Theorem 4.…”
Section: Penalty Methods (supporting)
confidence: 88%
“…In the case of k = 2, this is the l_2 penalty method, which is also a so-called lower-order penalty method ([8]). In this case, (12) becomes…”
Section: Penalty Methods (mentioning)
confidence: 99%
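The lower-order penalty idea quoted above can be illustrated on a toy problem. The sketch below is illustrative only — the problem, names, and parameter values are mine, not from the cited paper. It minimizes f(x) = (x - 2)^2 subject to x <= 1 using the power penalty f(x) + rho * max(0, g(x))^gamma, and shows that the gamma = 1/2 (lower-order) penalty recovers the exact constrained minimizer x* = 1 at a penalty parameter for which the classical gamma = 1 (l_1) penalty is still inexact.

```python
# Illustrative sketch (toy problem, not from the paper): power penalty
# F(x) = f(x) + rho * max(0, g(x))**gamma for min f(x) s.t. g(x) <= 0.
# Lower exponents (gamma < 1) achieve exact penalization at smaller rho.

def argmin_on_grid(F, lo=0.0, hi=3.0, n=3000):
    """Brute-force minimizer of F on a uniform grid over [lo, hi]."""
    best_x, best_v = lo, F(lo)
    for i in range(1, n + 1):
        x = lo + (hi - lo) * i / n
        v = F(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

f = lambda x: (x - 2.0) ** 2   # objective; unconstrained minimum at x = 2
g = lambda x: x - 1.0          # constraint g(x) <= 0, i.e. x <= 1

def penalized(gamma, rho):
    return lambda x: f(x) + rho * max(0.0, g(x)) ** gamma

rho = 1.5  # below the l_1 exactness threshold (|f'(1)| = 2) for this problem
x_l1   = argmin_on_grid(penalized(1.0, rho))  # classical l_1 penalty
x_half = argmin_on_grid(penalized(0.5, rho))  # lower-order (gamma = 1/2) penalty

# The l_1 minimizer still violates x <= 1 at this rho,
# while the gamma = 1/2 penalty already lands on the constrained solution.
print(x_l1, x_half)
```

The design point mirrors the quoted observation: the lower-order penalty attains the exact constrained minimizer at a strictly smaller penalty parameter than the l_1 penalty, at the cost of a nonsmooth (non-Lipschitz) penalty term near the constraint boundary.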
“…The terminology "nonlinear" refers to the nonlinearity of the objective function of the transformed problems with respect to the objective function of the original constrained optimization problem. The exact penalization result for nonconvex inequality-constrained single-objective optimization was obtained under a generalized calmness condition in [18]. It is worth noting that early work on the nonlinear Lagrangian can be found in [23].…”
Section: Introduction (mentioning)
confidence: 99%
“…The condition used is the lower semicontinuity of the functions involved, which is much weaker than the continuity assumption in [17]. Moreover, the conditions for exact penalization generalize those for single-objective optimization in [3,4,15,18]. It is worth noting that the nonlinear Lagrangian dual problems studied in this paper provide new models for the convex composite optimization problems studied in [6,7,21].…”
Section: Introduction (mentioning)
confidence: 99%