Abstract-Sparse and structured signal expansions on dictionaries can be obtained through explicit modeling in the coefficient domain. The originality of the present article lies in the construction and the study of generalized shrinkage operators, whose goal is to identify structured significance maps and to give rise to structured thresholding. These operators generalize the Group Lasso and the previously introduced Elitist Lasso by allowing more flexibility in the coefficient-domain modeling, and lead to the notion of social sparsity. The proposed operators are studied theoretically and embedded in iterative thresholding algorithms. Moreover, a link between these operators and a convex functional is established. Numerical studies on both simulated and real signals confirm the benefits of such an approach.

Index Terms-Structured Sparsity, Iterative Thresholding, Convex Optimization
I. INTRODUCTION

A wide range of inverse problems arising in signal processing has benefited from sparsity. Introduced in the mid-1990s by Chen, Donoho and Saunders [1], the idea is that a signal can be efficiently represented as a linear combination of elementary atoms chosen from an appropriate dictionary. Here, efficiently may be understood in the sense that only a few atoms are needed to reconstruct the signal. The same idea appeared in the machine learning community [2], where often only a few variables are relevant in inference tasks based on observations living in very high-dimensional spaces.

The natural measure of the cardinality of a support set, and hence of its sparsity, is the $\ell_0$ "norm", which counts the number of non-zero coefficients. Minimizing such a penalty leads to a combinatorial problem, which is usually relaxed into the $\ell_1$ norm, which is convex (a standard instance of this relaxation is recalled below).

Solving an inverse problem using the sparsity principle can be done by the following steps:

• Choose a dictionary in which the signal of interest is supposed to be sparse. Such a choice is driven by the nature of the signal.
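To fix ideas, the archetypal instance of this relaxation is basis pursuit denoising [1], also known as the Lasso. Writing $y$ for the observed signal, $\Phi$ for the dictionary, $\alpha$ for the coefficient vector and $\lambda > 0$ for a regularization parameter (generic notation used here for illustration only), one solves
\[
\min_{\alpha} \; \frac{1}{2}\|y - \Phi\alpha\|_2^2 + \lambda \|\alpha\|_1 \,,
\]
which can be tackled by iterative thresholding: each iteration applies, coefficient-wise, the classical soft-thresholding (shrinkage) operator
\[
\mathcal{S}_{\lambda}(\alpha_k) = \frac{\alpha_k}{|\alpha_k|}\,\max\bigl(|\alpha_k| - \lambda,\, 0\bigr) \,.
\]
The operators constructed in this article generalize this scalar shrinkage to structured variants.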