2021
DOI: 10.3390/a14050143

Overrelaxed Sinkhorn–Knopp Algorithm for Regularized Optimal Transport

Abstract: This article describes a set of methods for quickly computing the solution to the regularized optimal transport problem. It generalizes and improves upon the widely used iterative Bregman projections algorithm (or Sinkhorn–Knopp algorithm). We first propose to rely on regularized nonlinear acceleration schemes. In practice, such approaches lead to fast algorithms, but their global convergence is not ensured. Hence, we next propose a new algorithm with convergence guarantees. The idea is to overrelax the Breg…

Cited by 11 publications (15 citation statements). References 22 publications.
“…In this note we discuss a modified version of the Sinkhorn algorithm employing relaxation, which was recently proposed in [13] and [10]. It uses the update rule…”
Section: Introduction and Statement of Results
confidence: 99%
“…After a similarity transform, this requires to compute the spectral norm of a symmetric matrix. An even simpler approach, as suggested in [13], is to directly estimate ϑ², and hence ω_opt, by monitoring the convergence rate of the standard Sinkhorn method in terms of a suitable residual. In the final section 4 we include numerical illustrations, which indicate that in certain cases such heuristics can be quite precise already in the initial phase of the algorithm, resulting in the almost optimal convergence rate.…”
Section: Introduction and Statement of Results
confidence: 99%
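The relaxed update rule these citations refer to can be sketched as follows (an illustrative Python sketch; the function name, the fixed choice ω = 1.4, and the toy data are ours, and a practical implementation would instead tune ω, e.g. from the observed residual decay of the standard method, as the quote above suggests):

```python
import numpy as np

def overrelaxed_sinkhorn(C, a, b, eta=1.0, omega=1.4, n_iter=1000):
    """Overrelaxed Sinkhorn iteration: each multiplicative scaling step
    is extrapolated elementwise with a relaxation factor omega in (1, 2)."""
    K = np.exp(-C / eta)                 # Gibbs kernel
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):
        # overrelaxed multiplicative updates (elementwise powers)
        u = u ** (1.0 - omega) * (a / (K @ v)) ** omega
        v = v ** (1.0 - omega) * (b / (K.T @ u)) ** omega
    return u[:, None] * K * v[None, :]   # transport plan

# toy problem: two 3-bin histograms, squared-distance cost
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.4, 0.4, 0.2])
C = (np.arange(3.0)[:, None] - np.arange(3.0)[None, :]) ** 2
P = overrelaxed_sinkhorn(C, a, b)
```

Setting ω = 1 recovers the standard Sinkhorn update, so the same code doubles as a baseline for comparing convergence rates.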
See 1 more Smart Citation
“…The pioneering work of Cuturi [14] established a relation between entropy regularized optimal transport and matrix scaling of exp(−C/η), where exp denotes the elementwise exponential and η > 0 denotes the regularization parameter. Scaling the rows and columns of exp(−C/η) such that marginal constraints are satisfied can be achieved numerically using the Sinkhorn algorithm [44], whose convergence speed can be accelerated using greedy coordinate descent [2,32], overrelaxation [45] or accelerated gradient descent [16]. This matrix scaling approach allows one to solve much larger optimal transport problems compared to previous attempts based on solving the original linear program, which in turn has impacted various fields including image processing [19,39], data science [38], engineering [34] and machine learning [22,30].…”
Section: Introduction
confidence: 99%
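The matrix-scaling view described in this citation can be illustrated in a few lines (a minimal sketch; the variable names, η = 1, and the toy histograms are ours, not taken from any of the cited works):

```python
import numpy as np

def sinkhorn(C, a, b, eta=1.0, n_iter=1000):
    """Scale the rows and columns of K = exp(-C/eta) so that the plan
    diag(u) K diag(v) has row marginals a and column marginals b."""
    K = np.exp(-C / eta)          # elementwise exponential of -C/eta
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)         # fit column marginals
        u = a / (K @ v)           # fit row marginals
    return u[:, None] * K * v[None, :]

# toy problem: two 3-bin histograms, squared-distance cost
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.4, 0.4, 0.2])
C = (np.arange(3.0)[:, None] - np.arange(3.0)[None, :]) ** 2
P = sinkhorn(C, a, b)
```

Each pass is just two matrix-vector products, which is what makes this approach scale to problems far beyond the reach of generic linear-programming solvers.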
“…The introduction of this entropic regularization improves the scalability of OT, but involves a spreading of the mass and a loss of sparsity in the OT plan. When a sparse transport plan is sought, the convergence is slowed down, necessitating the use of acceleration strategies (Thibault et al, 2021). Regarding UOT with the (squared) ℓ2 norm, Blondel et al (2018) showed that the resulting OT plan is sparse and proposed to use an efficient L-BFGS-B algorithm (Byrd et al, 1995) to address this case.…”
Section: Introduction
confidence: 99%
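The squared-ℓ2 approach this citation mentions can be sketched with SciPy's L-BFGS-B solver applied to the dual of the quadratically regularized problem (a hypothetical minimal implementation in the spirit of Blondel et al. (2018), not their code; the function name, η = 10, and the toy data are ours):

```python
import numpy as np
from scipy.optimize import minimize

def quadratic_ot(C, a, b, eta=10.0):
    """Quadratically regularized OT solved in the dual with L-BFGS-B.
    The recovered plan P_ij = eta * max(0, f_i + g_j - C_ij) is
    typically sparse, unlike the entropic plan."""
    n, m = C.shape

    def neg_dual(x):
        f, g = x[:n], x[n:]
        slack = np.maximum(0.0, f[:, None] + g[None, :] - C)
        P = eta * slack
        val = f @ a + g @ b - 0.5 * eta * np.sum(slack ** 2)
        # gradient of the dual: marginal residuals of the current plan
        grad = np.concatenate([a - P.sum(axis=1), b - P.sum(axis=0)])
        return -val, -grad          # minimize the negated dual

    res = minimize(neg_dual, np.zeros(n + m), jac=True, method="L-BFGS-B")
    f, g = res.x[:n], res.x[n:]
    return eta * np.maximum(0.0, f[:, None] + g[None, :] - C)

# toy problem: two 3-bin histograms, squared-distance cost
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.4, 0.4, 0.2])
C = (np.arange(3.0)[:, None] - np.arange(3.0)[None, :]) ** 2
P = quadratic_ot(C, a, b)
```

At the dual optimum the gradient vanishes, which is exactly the statement that the plan's marginals match a and b; entries where f_i + g_j < C_ij are exactly zero, giving the sparsity noted in the quote.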