2016 | DOI: 10.1007/s10107-015-0969-z

Block coordinate proximal gradient methods with variable Bregman functions for nonsmooth separable optimization

Cited by 12 publications (6 citation statements)
References 18 publications
“…Since only part of the blocks are updated at each iteration of the BCD algorithm [37,38], it has a low per-iteration cost and a small memory footprint. There are three mainstream types of BCD algorithms: classical BCD [20,41], proximal BCD [20,45], and proximal gradient BCD [22,23]. The classical BCD algorithm performs exact minimization of the objective function at each iteration while fixing most components of the variable vector x at their values from the current iteration.…”
Section: Introduction (mentioning)
Confidence: 99%
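
As a rough illustration of the proximal gradient BCD scheme described in the excerpt above, here is a minimal Python sketch; the function names (grad_f_block, prox_g), the fixed step size, and the cyclic block rule are illustrative assumptions, not details taken from the cited papers.

    import numpy as np

    def proximal_gradient_bcd(grad_f_block, prox_g, x0, step, n_iters):
        # Sketch of proximal gradient BCD for F(x) = f(x) + sum_i g_i(x_i),
        # with f smooth and each g_i nonsmooth but prox-friendly.
        # grad_f_block(x, i): partial gradient of f with respect to block i.
        # prox_g[i](v, t): proximal operator of t * g_i evaluated at v.
        x = [np.asarray(b, dtype=float) for b in x0]
        for k in range(n_iters):
            i = k % len(x)                           # cyclic choice of one block per iteration
            g = grad_f_block(x, i)                   # gradient of the smooth part, block i only
            x[i] = prox_g[i](x[i] - step * g, step)  # forward (gradient) step, then prox step
            # all other blocks keep their current values, as in the quoted description
        return x

For example, if g_i(x_i) = λ‖x_i‖₁, then prox_g[i] is component-wise soft-thresholding; the classical BCD variant mentioned in the excerpt would instead minimize F exactly over block i.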
“…Subsequently, Yun, Tseng and Toh [48] extended the BCGD algorithm proposed in [42] from problems over R^n to problems over R^{m×n}, which requires an additional convexity assumption on G. For the same type of problem, Dai and Weng [19] proposed a synchronous parallel block coordinate descent algorithm with a randomized variant. Additionally, when G is not assumed to be convex, Hua and Yamashita [23] discussed a class of block coordinate proximal gradient algorithms based on Bregman functions, which may differ at each iteration. With an alternating minimization strategy, a forward-backward algorithm was proposed in [15], and the generated sequence is proved to converge to a stationary point of the considered problem.…”
Section: Introduction (mentioning)
Confidence: 99%
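
For orientation, a generic form of the Bregman-function-based block update mentioned in the excerpt can be written as follows; this is a sketch assuming a block-separable nonsmooth term g_i and an iteration-dependent kernel φ_k, and the precise conditions in [23] may differ:

\[
x_i^{k+1} \in \arg\min_{x_i} \Big\{ \big\langle \nabla_i f(x^k),\, x_i - x_i^k \big\rangle + g_i(x_i) + \tfrac{1}{\alpha_k}\, D_{\varphi_k}\big(x_i, x_i^k\big) \Big\},
\qquad
D_{\varphi}(u,v) = \varphi(u) - \varphi(v) - \big\langle \nabla \varphi(v),\, u - v \big\rangle .
\]

Choosing φ_k(u) = ½‖u‖² recovers the standard proximal gradient BCD step, while other kernels adapt the update to the geometry of block i.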
“…Based upon this point, the stationarity of any accumulation point follows. This methodology applies to more general frameworks, such as the block successive minimization in [27] and the Bregman-distance-based block coordinate proximal gradient methods in [13,30]. Furthermore, with the aid of the Łojasiewicz property, which is shared by a broad class of functions, one can obtain iterate convergence in more general settings; see, e.g., [5,31].…”
Mentioning
Confidence: 99%
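
For reference, the Łojasiewicz-type property invoked for iterate convergence is usually stated via the Kurdyka-Łojasiewicz inequality; the following is a standard formulation included only as background, not a statement from the cited works:

\[
\varphi'\big(F(x) - F(\bar{x})\big)\, \operatorname{dist}\big(0, \partial F(x)\big) \ \ge\ 1
\qquad \text{for all } x \in U \text{ with } F(\bar{x}) < F(x) < F(\bar{x}) + \eta,
\]

where U is a neighborhood of the point \bar{x} and \varphi\colon [0,\eta) \to [0,\infty) is concave and continuous, with \varphi(0) = 0 and \varphi' > 0 on (0,\eta).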
“…Some of them, within one outer iteration, (repeatedly) solve the subproblem inexactly to obtain a descent direction and then perform a line search; see, e.g., [6,33]. In [13], the authors treat the solution error as an additional term in the kernel function defining the Bregman distance, and then impose assumptions on the solution errors so as to invoke the results established in the exact setting. In [10,23], the authors allow flexibility in solving (1.3), in the sense that the relative error conditions are relaxed while the sufficient reduction property is maintained.…”
Mentioning
Confidence: 99%
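
To indicate what such inexactness conditions typically look like, here is a generic pair of requirements, an approximate stationarity bound relative to the step length together with a sufficient reduction of the objective; these are illustrative forms only, not the exact inequalities used in [10,13,23]:

\[
\operatorname{dist}\big(0,\, \partial_{x_i} H_k(x_i^{k+1})\big) \ \le\ \theta\, \big\| x_i^{k+1} - x_i^k \big\|,
\qquad
F(x^{k+1}) \ \le\ F(x^k) - c\, \big\| x^{k+1} - x^k \big\|^2,
\]

where H_k denotes the block subproblem objective, \theta \in [0,1), and c > 0.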