2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00758
Relative Interior Rule in Block-Coordinate Descent

Cited by 14 publications (18 citation statements). References 15 publications.
“…Similar to other dual block coordinate ascent schemes, Algorithm 1 can get stuck in suboptimal points; see [58, 59]. As seen in experiments, however, these are usually not far from the optimum.…”
Section: Parallel Deferred Min-Marginal Averaging
confidence: 72%
“…The current min-marginal difference is subtracted and the one from the previous iteration is added (line 10) by distributing it equally among the subproblems J_i. Following [59], we use a damping factor ω ∈ (0, 1) (0.5 in our experiments) to obtain better final solutions. Proposition 1.…”
Section: Parallel Deferred Min-Marginal Averaging
confidence: 99%
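The damped averaging update quoted above can be sketched for a single binary variable shared by several subproblems. This is a toy reparametrization only: the function name, the cost-pair representation, and the two-subproblem setup are illustrative assumptions, not the cited implementation.

```python
def min_marginal_averaging(costs, omega=0.5):
    """One damped min-marginal averaging step for one shared binary
    variable. `costs` is a list of per-subproblem cost pairs
    (cost of label 0, cost of label 1)."""
    # Min-marginal difference of each subproblem for this variable.
    diffs = [c1 - c0 for c0, c1 in costs]
    avg = sum(diffs) / len(diffs)
    # Move each subproblem's difference toward the average; the
    # damping factor omega in (0, 1) takes only a partial step
    # (0.5 in the cited work's experiments).
    return [(c0, c1 - omega * (diffs[i] - avg))
            for i, (c0, c1) in enumerate(costs)]
```

With `omega=1.0` a full step equalizes all min-marginal differences; with `omega=0.5` each difference moves halfway toward the average, which the quoted statement reports yields better final solutions.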
“…However, the construction of the decomposition into combinatorial subproblems, the specific choice of updates and their efficient implementation are still left open for the algorithm designer to decide anew for each new problem class. The work [47] analyzes different update operations for DBCA problems and theoretically characterizes the resulting stationary points.…”
Section: Related Work and Contribution
confidence: 99%
“…It is well-known that, except in special cases [13], DBCA can fail to reach the optimum of the relaxation. Suboptimal stationary points of DBCA algorithms are analyzed in [47]. One way to attain optima of Lagrangean relaxations with DBCA algorithms is to replace the original non-smooth dual objective with a smooth approximation on which DBCA is guaranteed to find the optimum.…”
Section: Smoothing
confidence: 99%
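The smoothing idea described above replaces the nonsmooth min in the dual objective with a smooth approximation. A common choice for such a surrogate is the log-sum-exp soft-min; the sketch below is a generic illustration with an assumed temperature parameter, not the specific smoothing used in the cited work.

```python
import math

def softmin(values, temperature=0.1):
    """Smooth approximation of min(values) via log-sum-exp;
    converges to the exact min as temperature -> 0."""
    m = min(values)  # shift for numerical stability
    s = sum(math.exp(-(v - m) / temperature) for v in values)
    return m - temperature * math.log(s)
```

Because the soft-min is smooth and strictly below the true min, block-coordinate ascent on the smoothed dual avoids the flat regions where the nonsmooth version can stall.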
“…This is a motivation to look for other classes of convex optimization problems for which (block-)coordinate descent would work well or, alternatively, to extend convergent message passing methods to a wider class of convex problems than the dual LP relaxation of MAP inference. A step in this direction is the work [30], where it was observed that if the minimizer of the problem over the current variable block is not unique, one should choose a minimizer that lies in the relative interior of the set of block-optimizers. It is shown that any update satisfying this rule is, in a precise sense, not worse than any other exact update.…”
Section: Introduction
confidence: 99%
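The relative interior rule quoted above says that when a block subproblem has multiple minimizers, the update should pick one in the relative interior of the minimizer set rather than an arbitrary (e.g. extreme) one. A minimal discretized stand-in for this rule, over a finite set of candidate values, could look as follows; the function name, tolerance, and midpoint choice are illustrative assumptions.

```python
def relative_interior_argmin(values, tol=1e-12):
    """Return an index minimizing `values`, chosen from the middle of
    the argmin set -- a discrete stand-in for picking a minimizer in
    the relative interior rather than at an endpoint."""
    m = min(values)
    argmins = [i for i, v in enumerate(values) if v <= m + tol]
    # Midpoint of the (contiguous, in this toy setting) argmin set.
    return argmins[len(argmins) // 2]
```

On a piecewise-linear block objective whose minimizer set is a flat interval, this choice avoids jumping to an endpoint of the interval, which is the kind of tie-breaking the cited work shows to be, in a precise sense, no worse than any other exact update.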