1963
DOI: 10.1137/0111043
Minimizing Certain Convex Functions

Cited by 87 publications (45 citation statements)
References 0 publications
“…Note that, by Lemma 5.6 (or simply from (38) and (40)), i = 1, 2. By taking the limits for n → ∞ in (37)–(40) we obtain…”

Section: ) If and Only If There Exists An
confidence: 99%
“…In particular, we stress very clearly that well-known approaches as in [9,14,35,36] are not directly applicable to this problem, because they address either additive problems or smooth convex minimization, which is not the case for total variation minimization. We emphasize that the successful convergence of such alternating algorithms is far from obvious for nonsmooth and nonadditive problems, as many counterexamples can be constructed; see for instance [38]. Moreover, for total variation minimization, the interesting solutions may be discontinuous, e.g., along curves in 2D.…”

Section: Difficulty Of the Problem
confidence: 99%
“…Associated with distributed solutions approaches arising from these two motivations are the Gauss-Seidel approaches and the Jacobi approaches. Gauss-Seidel decomposition/coordination techniques such as the alternating direction method of multipliers (ADMM) [29,24,4,17,36,9] and the block coordinate descent (BCD) method [58,45,6,31,5,4,30,56] perform one subproblem computation at a time based on the knowledge of the most up-to-date information passed from the other subproblems. The potential for parallelization of the solution computation (treated distinctly from distribution) depends on the nature of coupling among the subproblems.…”
Section: Distributed Versus Parallel Optimization
confidence: 99%
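The Gauss-Seidel/Jacobi distinction drawn in the excerpt above can be illustrated with a minimal NumPy sketch for a linear system: a Jacobi step updates every component from the previous iterate (so all updates can run in parallel), while a Gauss-Seidel step sweeps through components in sequence, immediately using the newest values. The matrix and step functions here are illustrative assumptions, not taken from any of the cited papers.

```python
import numpy as np

def jacobi_step(A, b, x):
    # Every component is updated from the OLD iterate x -> parallelizable.
    D = np.diag(A)
    return (b - (A @ x - D * x)) / D

def gauss_seidel_step(A, b, x):
    # Components are updated in sequence, each using the NEWEST values.
    x = x.copy()
    for i in range(len(b)):
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

# Diagonally dominant system, so both iterations converge.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x_jac = np.zeros(2)
x_gs = np.zeros(2)
for _ in range(50):
    x_jac = jacobi_step(A, b, x_jac)
    x_gs = gauss_seidel_step(A, b, x_gs)
```

On well-conditioned problems the Gauss-Seidel sweep typically converges in fewer iterations, at the cost of the sequential dependence that the excerpt identifies as the obstacle to parallelization.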
“…This algorithm has a long history in applied mathematics and has its roots in the Gauss-Seidel method for solving linear systems (Warga (1963); Ortega and Rheinboldt (1970); Tseng (2001)). The CDA optimizes an objective function by working on one coordinate (or a block of coordinates) at a time, iteratively cycling through all coordinates until convergence is reached.…”

Section: Introduction
confidence: 99%
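The coordinate descent algorithm (CDA) described in the excerpt above can be sketched for a smooth convex objective, here least squares f(x) = ||Ax − b||², where each one-dimensional subproblem has a closed-form minimizer. This is a minimal illustrative sketch (the problem instance and iteration budget are assumptions, not from the cited works):

```python
import numpy as np

def coordinate_descent(A, b, n_cycles=200):
    """Cyclic coordinate descent on f(x) = ||Ax - b||^2."""
    m, n = A.shape
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)            # precompute ||a_j||^2
    for _ in range(n_cycles):                # cycle through all coordinates
        for j in range(n):                   # one coordinate at a time
            # Residual with coordinate j's contribution removed.
            r = b - A @ x + A[:, j] * x[j]
            # Exact minimizer of f along coordinate j.
            x[j] = A[:, j] @ r / col_sq[j]
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0],
              [0.0, 1.0]])
b = np.array([3.0, 4.0, 1.0])
x = coordinate_descent(A, b)
```

Each inner update is the Gauss-Seidel idea applied to optimization: coordinate j is minimized exactly while all other coordinates are held at their most recently updated values.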