“…A variety of sparsity-inducing penalty functions have been proposed to approximate the ℓ0 term: the exponential concave function [4], the ℓp-norm with 0 < p < 1 [15] and with p < 0 [71], the Smoothly Clipped Absolute Deviation (SCAD) [13], the logarithmic function [82], and Capped-ℓ1 [26] (see (21), (22) and Table 1 in Section 3 for the definitions of these functions). Using these approximations, several algorithms have been developed for the resulting optimization problems, most of them in the context of feature selection in classification, sparse regression, or, more especially, sparse signal recovery: the Successive Linear Approximation (SLA) algorithm [4], DCA (Difference of Convex functions Algorithm) based algorithms [11,12,16,21,28,42,43,51,54,63,65], Local Linear Approximation (LLA) [87], Two-stage ℓ1 [83], Adaptive Lasso [86], reweighted-ℓ1 algorithms [8], reweighted-ℓ2 algorithms such as the FOCal Underdetermined System Solver (FOCUSS) [18,71,72], Iteratively Reweighted Least Squares (IRLS), and the Local Quadratic Approximation (LQA) algorithm [13,87].…”
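To make the reweighted-ℓ2 family mentioned above concrete, the following is a minimal sketch of a FOCUSS-style iteration for sparse recovery from an underdetermined system Ax = b: each step solves a weighted minimum-norm problem, with weights derived from the current iterate so that small entries are driven toward zero. The function name `focuss`, the parameter choices, and the demo data are illustrative assumptions, not the exact algorithm of the cited papers.

```python
import numpy as np

def focuss(A, b, p=1.0, iters=50, eps=1e-8):
    """Illustrative sketch of a FOCUSS-style reweighted-l2 iteration:
    approximately minimize ||x||_p subject to A x = b by repeatedly
    solving a weighted least-norm problem (not the cited papers' code)."""
    x = np.linalg.pinv(A) @ b           # minimum-l2-norm starting point
    for _ in range(iters):
        # weights shrink entries that are already small (eps avoids 0-weights)
        w = np.abs(x) ** (1 - p / 2) + eps
        AW = A * w                      # equals A @ diag(w) by broadcasting
        q = np.linalg.pinv(AW) @ b      # min-norm solution of (A W) q = b
        x = w * q
    return x

# small underdetermined demo with a known 2-sparse solution
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = [2.0, -1.5]
b = A @ x_true
x_hat = focuss(A, b)
```

Because each step solves the weighted system exactly via the pseudoinverse, the iterates stay feasible (A x ≈ b) while the reweighting progressively concentrates the energy on a few coordinates, which is the mechanism shared by the IRLS/FOCUSS methods listed in the text.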