“…The inequality (1.6) implies that L1 minimization may not perform well for highly coherent matrices, i.e., µ(A) ∼ 1, since the sparsity bound on ‖x‖₀ is then at most one, which seldom occurs simultaneously with Ax* = b. Beyond the popular L1 norm, there is a variety of regularization functionals that promote sparsity, such as Lp [9,43,23], L1-L2 [44,26], capped L1 (CL1) [48,37], and transformed L1 (TL1) [29,46,47]. Most of these models are nonconvex, which makes it difficult to prove exact recovery guarantees and algorithmic convergence, but they tend to give better empirical results than the convex L1 approach.…”
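A minimal sketch of the coherence argument, assuming (1.6) is the standard coherence-based recovery bound ‖x‖₀ < (1 + 1/µ(A))/2, where µ(A) is the mutual coherence (the largest absolute inner product between distinct normalized columns of A). The example matrix below is hypothetical, chosen so that two columns are nearly parallel:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns of A."""
    # Normalize each column to unit Euclidean norm.
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(An.T @ An)      # Gram matrix of the normalized columns
    np.fill_diagonal(G, 0.0)   # ignore self-correlations on the diagonal
    return G.max()

# Two almost-parallel columns make mu(A) close to 1, so the bound
# ||x||_0 < (1 + 1/mu(A)) / 2 guarantees recovery only of 1-sparse vectors.
A = np.array([[1.0, 0.99, 0.0],
              [0.0, 0.14, 1.0]])
mu = mutual_coherence(A)
bound = 0.5 * (1.0 + 1.0 / mu)
print(mu, bound)
```

Here mu is close to 1 and the bound barely exceeds 1, illustrating the text's point: for highly coherent A, the L1 guarantee covers only vectors with at most one nonzero entry.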