2018 · DOI: 10.1007/s10898-018-00730-5

Generalized forward–backward splitting with penalization for monotone inclusion problems

Abstract: We introduce a generalized forward–backward splitting method with penalty term for solving monotone inclusion problems involving the sum of a finite number of maximally monotone operators and the normal cone to the nonempty set of zeros of another maximally monotone operator. We show weak ergodic convergence of the generated sequence of iterates to a solution of the considered monotone inclusion problem, provided that the condition corresponding to the Fitzpatrick function of the operator describing the set of…
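
To make the scheme described in the abstract concrete, here is a minimal numerical sketch of a forward–backward iteration with a penalty term: the resolvents of the operators A_i are applied backward, and the operator B, whose zero set defines the constraint, enters through a forward penalization step. This is an illustrative instantiation under assumed operators and parameters, not the paper's exact algorithm (which in particular lets the penalty parameter vary along the iterations, under a condition stated via the Fitzpatrick function of B).

```python
import numpy as np

# Hedged sketch, NOT the paper's exact method: all operators, weights,
# and parameters below are illustrative assumptions.
#
# Toy inclusion: 0 in A_1(x) + A_2(x) + N_C(x), with C = zer(B),
# A_i(x) = x - a_i (resolvents in closed form), B(x) = x - proj_C(x).

a = [np.array([4.0, 0.0]), np.array([0.0, 2.0])]
w = [0.5, 0.5]                       # averaging weights (sum to 1)

def proj_C(x):                       # C = {x : x[0] == x[1]}
    return np.full(2, x.mean())

def B(x):                            # maximally monotone, zer(B) = C
    return x - proj_C(x)

def resolvent_A(i, y, gam):          # J_{gam A_i}(y) for A_i(x) = x - a_i
    return (y + gam * a[i]) / (1.0 + gam)

beta, gam = 20.0, 0.05               # fixed penalty and step (gam * beta < 2)
x = np.zeros(2)
z = [x.copy() for _ in a]            # one auxiliary sequence per operator A_i

for k in range(2000):
    g = beta * B(x)                  # forward (penalization) step direction
    for i in range(len(a)):          # backward (resolvent) step for each A_i
        z[i] += resolvent_A(i, 2.0 * x - z[i] - gam * g, gam / w[i]) - x
    x = sum(wi * zi for wi, zi in zip(w, z))

# With beta held fixed, the limit solves the *penalized* inclusion
# 0 in A_1(x) + A_2(x) + beta*B(x) (about [1.545, 1.455] here); the
# paper instead drives the penalty parameter along the iterations so
# that the (ergodic) iterates reach the constrained solution [1.5, 1.5].
print(x)
```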

Cited by 4 publications (4 citation statements) · References: 35 publications
“…(ii) Note that hypothesis (H3) is a relaxation of Assumption 4.1 (S3) in [22]. In fact, the limit superior in [22] is bounded above by 1/L_g, whereas in this work it can be extended to 2/L_g. This allows us to consider larger parameters (α_k)_{k≥1} and (β_k)_{k≥1}.…”
Section: Preliminaries (mentioning)
Confidence: 94%
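
Read literally, the relaxed hypothesis is a limit-superior bound on the parameter sequences. The following LaTeX rendering is a hedged reconstruction: only the bounds 1/L_g and 2/L_g appear in the snippet, and the exact quantity under the limsup is our assumption.

```latex
% Hedged reconstruction: the product \alpha_k \beta_k under the limsup
% is an assumption; only the bounds 1/L_g and 2/L_g are stated above.
\limsup_{k\to\infty} \alpha_k \beta_k \;\le\; \frac{1}{L_g}
\quad \text{(Assumption 4.1 (S3) in [22])},
\qquad
\limsup_{k\to\infty} \alpha_k \beta_k \;\le\; \frac{2}{L_g}
\quad \text{(relaxed hypothesis (H3))}.
```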
“…Remark 6 Algorithm 1 is different from [22, Algorithm 3.1]. In fact, in our previous work we consider problem (1) with h = Σ_{i=1}^m h_i and evaluate the gradient ∇h(x_k) at each iteration k. However, the iterative scheme proposed here allows us to evaluate the gradient ∇h_i(φ_{i,k}) at each sub-iteration i.…”
Section: Preliminaries (mentioning)
Confidence: 99%
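
The structural difference the snippet describes (one full gradient of h per iteration versus one component gradient per sub-iteration) can be sketched as follows. The function names, the prox operator, and the update order are illustrative assumptions, not the citing paper's exact Algorithm 1.

```python
import numpy as np

# Illustrative contrast only; names (grads, prox, gam) and the update
# shape are assumptions, not the citing paper's exact Algorithm 1.

def full_gradient_step(x, grads, prox, gam):
    """[22]-style step: evaluate the full gradient of h = sum_i h_i
    once, at the current iterate x_k."""
    g = sum(gi(x) for gi in grads)            # nabla h(x_k)
    return prox(x - gam * g, gam)

def incremental_step(x, grads, prox, gam):
    """Per-sub-iteration step: evaluate each nabla h_i at its own
    point phi_{i,k}, updated within the same outer iteration k."""
    phi = x
    for gi in grads:                          # sub-iterations i = 1..m
        phi = prox(phi - gam * gi(phi), gam)  # uses nabla h_i(phi_{i,k})
    return phi

# Tiny smoke test with h_i(x) = 0.5 * ||x - a_i||^2 and a placeholder prox.
a_pts = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
grads = [lambda x, ai=ai: x - ai for ai in a_pts]
prox = lambda y, gam: y                       # identity stands in for a resolvent
x = np.zeros(2)
for _ in range(200):
    x = incremental_step(x, grads, prox, gam=0.1)
print(x)  # settles near [0.5, 0.5] (exactly [0.474, 0.526] for this fixed step)
```

The trade-off illustrated here is standard for incremental schemes: per-component gradients are cheaper per sub-iteration but are evaluated at drifting points φ_{i,k} rather than at a single iterate x_k.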
“…Another technique for the problem LRMOP(ŝ, ε) is bilevel optimization. For more details, we refer the reader to [26–28].…”
Section: Remark (mentioning)
Confidence: 99%