2021
DOI: 10.48550/arxiv.2109.13566
Preprint
On the rate of convergence of the Difference-of-Convex Algorithm (DCA)

Abstract: In this paper, we study the convergence rate of the DCA (Difference-of-Convex Algorithm), also known as the convex-concave procedure. The DCA is a popular algorithm for difference-of-convex (DC) problems, and is known to converge to a stationary point under some assumptions. We derive a worst-case convergence rate of O(1/√N) after N iterations of the objective gradient norm for certain classes of unconstrained DC problems. For constrained DC problems with convex feasible sets, we obtain an O(1/N) convergence rate…
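For intuition, the DCA iteration analyzed in the abstract can be sketched on a toy problem: writing the objective as f = g - h with g and h convex, each step linearizes h at the current iterate and minimizes the resulting convex surrogate. The toy objective below, f(x) = x^4 - x^2, is an illustrative assumption (not from the paper) chosen so the convex subproblem has a closed-form solution:

```python
import math

def dca_step(x):
    # DC split: g(x) = x**4 (convex), h(x) = x**2 (convex), f = g - h.
    # DCA subproblem: x_{k+1} = argmin_x g(x) - h'(x_k) * x
    #                          = argmin_x x**4 - 2 * x_k * x,
    # whose first-order condition 4*x**3 = 2*x_k gives x = cbrt(x_k / 2).
    return math.copysign(abs(x / 2.0) ** (1.0 / 3.0), x)

x = 1.0
for _ in range(100):
    x = dca_step(x)
# The iterates approach a stationary point of f; here f'(x) = 4x^3 - 2x
# vanishes at x = 1/sqrt(2), consistent with DCA's stationarity guarantee.
```

The fixed points of this iteration are exactly the stationary points of f, which matches the convergence-to-stationarity result the abstract quantifies.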

Cited by 6 publications (11 citation statements) | References 38 publications (57 reference statements)
“…The actively studied numerical methods for solving a DC program include the DCA (Tao and An, 1997, 1998; An and Tao, 2005; Souza et al, 2016), which is also known as the concave-convex procedure (Yuille and Rangarajan, 2003; Sriperumbudur and Lanckriet, 2009; Lipp and Boyd, 2016), the proximal DCA (Sun et al, 2003; Moudafi and Maingé, 2006; Moudafi, 2008; An and Nam, 2017), and the direct gradient methods (Khamaru and Wainwright, 2018). However, when the two convex components are both non-smooth, the existing methods have only asymptotic convergence results, except the method by Abbaszadehpeivasti et al (2021), who considered a stopping criterion different from ours. When at least one component is smooth, non-asymptotic convergence rates have been established with and without the Kurdyka-Łojasiewicz (KL) condition (Souza et al, 2016; Artacho et al, 2018; Wen et al, 2018; An and Nam, 2017; Khamaru and Wainwright, 2018).…”
Section: Related Work
confidence: 94%
“…Note that our objective function is non-convex, since the within-group penalty p_w in (7) is not convex and the loss function in (3) involves minimization. To tackle this challenge, we decompose our objective function as a difference of two convex functions and solve the optimization problem based on the difference-of-convex (DC) algorithm (Le Thi Hoai and Tao, 1997; Shen et al, 2012), which is shown to converge to a stationary point under regularity conditions (Abbaszadehpeivasti et al, 2021). Also, we use a smooth approximation of the between-group fused Lasso penalty.…”
Section: Methods
confidence: 99%
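The DC-decomposition step described in this excerpt can be shown on a standalone example. The clipped-absolute penalty below is a hypothetical stand-in (not the within-group penalty p_w from the cited work): a non-convex penalty written exactly as a difference of two convex functions, which is the form the DCA requires.

```python
# Illustrative DC decomposition (assumption, not the cited paper's penalty):
# the non-convex clipped penalty p(t) = min(|t|, 1) splits as
#   p(t) = g(t) - h(t),   g(t) = |t|,   h(t) = max(|t| - 1, 0),
# where both g and h are convex.
def g(t):
    return abs(t)

def h(t):
    return max(abs(t) - 1.0, 0.0)

def p(t):
    return min(abs(t), 1.0)

# Verify p = g - h pointwise on a few sample points.
for t in (-2.5, -1.0, -0.3, 0.0, 0.7, 1.0, 4.2):
    assert abs((g(t) - h(t)) - p(t)) < 1e-12
```

Once an objective is in this g - h form, the DCA linearizes h and minimizes the convex remainder, as in the DC literature cited above.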
“…We assume that strong duality holds for problem (1), that is
$$\max_{\lambda \in \mathbb{R}^r} D(\lambda) = \min_{Ax + Bz = b} f(x) + g(z).$$
…”
Section: Algorithm 1 ADMM
confidence: 99%
“…Problem (1) appears naturally (or after variable splitting) in many applications in statistics, machine learning, and image processing, to name but a few [5, 19, 24, 34]. The most common method for solving problem (1) is the alternating direction method of multipliers (ADMM).…”
Section: Introduction
confidence: 99%
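The ADMM referenced in this excerpt alternates a minimization over x, a minimization over z, and a dual (multiplier) update for the coupling constraint. A minimal consensus instance is sketched below; the quadratic choices of f and g, the constraint x - z = 0, and the penalty parameter rho are illustrative assumptions, not taken from the paper:

```python
# Consensus ADMM sketch (assumed toy problem):
#   minimize f(x) + g(z)  subject to  x - z = 0,
# with f(x) = 0.5*(x - a)**2 and g(z) = 0.5*(z - c)**2,
# so both subproblems have closed-form minimizers.
a, c, rho = 1.0, 3.0, 1.0
x = z = u = 0.0  # u is the scaled dual variable

for _ in range(200):
    # x-update: argmin_x f(x) + (rho/2)*(x - z + u)**2
    x = (a + rho * (z - u)) / (1.0 + rho)
    # z-update: argmin_z g(z) + (rho/2)*(x - z + u)**2
    z = (c + rho * (x + u)) / (1.0 + rho)
    # dual update on the residual of x - z = 0
    u = u + x - z

# The iterates satisfy the constraint in the limit: x = z = (a + c) / 2 here.
```

The dual update enforces the constraint Ax + Bz = b (here A = I, B = -I, b = 0), which is exactly the setting where the strong-duality assumption quoted above is invoked.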