Table: Convergence guarantees of the surveyed algorithms†

Algorithm                                  Problem     ε-optimality measure    Convergence rate

Algorithms for Convex Models
PGM [16]                                   (4)         objective value error   O(1/k)
APGM [16]                                  (4)         objective value error   O(1/k^2)
IALM [17]                                  (2)         -                       convergence, no rate given
ADMM [18]                                  (2)         -                       convergence, no rate given
ALM [19]                                   (17)        objective value error   O(1/k)
FALM [19]                                  (17)        objective value error   O(1/k^2)
ASALM [20]                                 (19)        -                       convergence unclear, no rate given
VASALM [20]                                (19)        -                       convergence, no rate given
PSPG [21]                                  (3)         objective value error   O(1/k)
ADMIP [22]                                 (3)         objective value error   O(1/k)
Quasi-Newton method (fastRPCA) [23]        (26)        -                       convergence, no rate given
3-block ADMM [24]                          (28)        -                       convergence, no rate given
Frank-Wolfe [25]                           (30)        objective value error   O(1/k)

Algorithms for Nonconvex Models
GoDec [26]                                 (33)        -                       local convergence, no rate given
GreBsmo [27]                               (36)        -                       convergence unclear, no rate given
Alternating Minimization (R2PCP) [28]      (35)        -                       local convergence, no rate given
Gradient Descent (GD) [11]                 ≈ (36)      -                       linear convergence, given proper initialization and an incoherence assumption
Alternating Minimization [29]              (37)        -                       local convergence, given proper initialization, incoherence, and RIP assumptions
Stochastic alg. [30]                       (39)        -                       convergence if the iterates remain full-rank matrices, no rate given
LMaFit [31]                                (44)        -                       convergence if the difference between consecutive iterates tends to zero, no rate given
Conditional Gradient [32]                  (48), (54)  perturbed KKT           O(1/√k)

† Note: Some of these algorithms solve different problems, and the ε-optimality measures also differ, so the convergence rates are not directly comparable with each other. [11] has no explicit optimization formulation, but its objective is similar to (36).
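The gap between the O(1/k) rate of PGM and the O(1/k^2) rate of APGM in the table can be observed numerically on any composite problem with a cheap proximal operator. The sketch below is an illustrative assumption, not taken from the surveyed work: it compares plain and accelerated proximal gradient (ISTA vs. FISTA) on a toy ℓ1-regularized least-squares problem rather than the RPCA formulations (4); the matrix sizes, regularization weight, and iteration budget are arbitrary, but the proximal-gradient mechanics and the rate separation are the same.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(A, b, lam, x):
    # Composite objective: smooth least-squares term + nonsmooth l1 term.
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

def pgm(A, b, lam, step, iters):
    # Proximal gradient method (ISTA): objective error decays like O(1/k).
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * (A.T @ (A @ x - b)), step * lam)
    return x

def apgm(A, b, lam, step, iters):
    # Accelerated proximal gradient (FISTA): objective error decays like O(1/k^2).
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(iters):
        x_new = soft_threshold(y - step * (A.T @ (A @ y - b)), step * lam)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# Toy sparse-recovery instance (sizes and noise level are arbitrary choices).
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120)
x_true[:8] = rng.standard_normal(8)
b = A @ x_true + 0.01 * rng.standard_normal(60)
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient

f_pgm = objective(A, b, lam, pgm(A, b, lam, step, 100))
f_apgm = objective(A, b, lam, apgm(A, b, lam, step, 100))
print(f_pgm, f_apgm)  # APGM typically reaches a lower objective in the same budget
```

The only difference between the two solvers is the momentum extrapolation step in `apgm`; for the RPCA models in the table, the soft-thresholding proximal operator would be replaced by (or combined with) singular-value thresholding for the nuclear-norm term.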