2013
DOI: 10.1109/tit.2013.2280272

A Dirty Model for Multiple Sparse Regression

Abstract: Sparse linear regression - finding an unknown vector from linear measurements - is now known to be possible with fewer samples than variables, via methods like the LASSO. We consider the multiple sparse linear regression problem, where several related vectors - with partially shared support sets - have to be recovered. A natural question in this setting is whether one can use the sharing to further decrease the overall number of samples required. A line of recent research has studied the use of ℓ1/ℓq norm block-regularization…
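For orientation, the setup described in the abstract can be written as the following observation model; the notation (r tasks, designs X^{(k)}, noise w^{(k)}) is introduced here for illustration and is not quoted from the paper:

y^{(k)} = X^{(k)} \theta^{(k)} + w^{(k)}, \qquad k = 1, \dots, r

with each \theta^{(k)} \in \mathbb{R}^p sparse and the supports \mathrm{supp}(\theta^{(k)}) only partially shared across the r problems. The question the abstract poses is whether the shared support lets all \theta^{(k)} be recovered from fewer total samples than running a separate LASSO per problem.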

Cited by 64 publications (78 citation statements). References 21 publications.
“…For example, [13][14][15][34][35][36] assume that there is a set of features (either in the original space or in a transformed space) shared across all tasks. There are also multi-task learning algorithms that use sparsity constraints, such as the ℓ1 norm constraint [11], the ℓ2,1 norm constraint [37], the trace norm constraint [15,36], and combinations of them, such as ℓ1 + ℓ1,q norm multi-task learning [16], sparse and low-rank multi-task learning [13], robust multi-task learning using group-sparse and low-rank constraints [38], and robust multi-task feature learning [39].…”
Section: Related Work
confidence: 99%
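The regularizers named in the excerpt above are standard matrix penalties on the task-coefficient matrix. The helper below is an illustrative sketch (the function and variable names are ours, not from any of the cited works) that makes the definitions concrete for a matrix with one row per feature and one column per task:

import numpy as np

def multitask_penalties(W):
    """Standard penalties for a coefficient matrix W of shape (p, K):
    one row per feature, one column per task."""
    l1 = np.abs(W).sum()                    # element-wise l1: plain sparsity
    l21 = np.linalg.norm(W, axis=1).sum()   # l2,1 norm: whole feature rows switched on/off across tasks
    nuclear = np.linalg.norm(W, ord='nuc')  # trace (nuclear) norm: encourages low-rank task coupling
    return l1, l21, nuclear

Combinations of these, as in the ℓ1 + ℓ1,q or sparse-plus-low-rank formulations cited above, simply add two such terms with separate weights.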
“…the user attributes concurrently, which can improve the generalization performance compared with a standard multi-class classification problem. Most state-of-the-art supervised multi-task learning methods use only the limited labeled data Y for model training, e.g., [10][11][12][13][14][15][16]. As we know, the unlabeled data, i.e., the data with missing labels, also contain useful information.…”
Section: Introduction
confidence: 99%
“…They then remove the outlier tasks and perform ℓ2,1 norm multi-task learning on the clean dataset composed of similar tasks. In Jalali et al.'s work [34], the sum of two matrices is used to represent the parameters, and the two matrices are regularized differently to learn shared features and individual outliers for the different tasks separately.…”
Section: Multi-task Learning
confidence: 99%
“…Jalali et al., in [34], propose to decompose the model W into two components, P and Q, where one captures features shared among tasks while the other captures intrinsic properties that are useful for recognizing individual tasks.…”
Section: Dirty Multi-task Lasso
confidence: 99%
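The decomposition described in this excerpt (W split into a shared component and a task-private component, each regularized differently) can be prototyped with a plain proximal-gradient loop. The sketch below is illustrative only, not the authors' algorithm: it uses a squared loss per task, puts a row-wise ℓ1/ℓ2 (group) penalty on the shared part rather than an ℓ1/ℓq-type block penalty, and all names (dirty_model_fit, lam_shared, lam_private) are made up for this example.

import numpy as np

def dirty_model_fit(Xs, ys, lam_shared, lam_private, n_iter=500):
    """Fit W = B + S for K regression tasks.

    Xs: list of K design matrices, Xs[k] has shape (n_k, p)
    ys: list of K response vectors, ys[k] has shape (n_k,)
    B (p x K): rows active across many tasks capture shared features.
    S (p x K): element-wise sparse, absorbs task-private coefficients.
    """
    K = len(Xs)
    p = Xs[0].shape[1]
    B = np.zeros((p, K))
    S = np.zeros((p, K))
    # step size from a crude Lipschitz bound on the joint gradient in (B, S)
    L = max(np.linalg.norm(X, 2) ** 2 / X.shape[0] for X in Xs)
    step = 1.0 / (2.0 * L)
    for _ in range(n_iter):
        # gradient of sum_k (1 / 2 n_k) * ||y_k - X_k (b_k + s_k)||^2,
        # identical for the B and S blocks
        G = np.zeros((p, K))
        for k in range(K):
            resid = Xs[k] @ (B[:, k] + S[:, k]) - ys[k]
            G[:, k] = Xs[k].T @ resid / Xs[k].shape[0]
        B_half = B - step * G
        S_half = S - step * G
        # prox of the row-wise group penalty: shrink whole rows of B
        row_norms = np.maximum(np.linalg.norm(B_half, axis=1, keepdims=True), 1e-12)
        B = np.maximum(0.0, 1.0 - step * lam_shared / row_norms) * B_half
        # prox of the element-wise l1 penalty: soft-threshold S
        S = np.sign(S_half) * np.maximum(np.abs(S_half) - step * lam_private, 0.0)
    return B, S

After fitting, B + S is the coefficient matrix; inspecting which rows of B are dense across columns, versus which entries survive only in S, separates features shared by the tasks from task-specific ones.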
“…Of these, the most directly related to our work are [40] and [27], which formulate the general problem of estimation in settings where the signal may be split into a superposition of different types, θ = Σ_{ℓ=1}^{L} θ^(ℓ), through the use of a penalization of a different type for each component of the superposition, of the form Σ_{ℓ=1}^{L} penalty_ℓ(θ^(ℓ)). Within the general framework, [40] and [27] proceed to focus their study on several leading cases in sparse estimation, emphasizing the interplay between group-wise sparsity and element-wise sparsity and considering problems in multi-task learning. By contrast, we propose and focus on another leading case, which emphasizes the interplay between sparsity and density in the context of regression learning.…”
confidence: 99%
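Read alongside the abstract, the superposition framework described in this excerpt amounts to the following penalized estimation template; the weights λ_ℓ and the generic loss term are illustrative notation rather than quotations from [40] or [27]:

\min_{\theta^{(1)}, \dots, \theta^{(L)}} \; \mathcal{L}\Big(\sum_{\ell=1}^{L} \theta^{(\ell)}\Big) \;+\; \sum_{\ell=1}^{L} \lambda_\ell \, \mathrm{penalty}_\ell\big(\theta^{(\ell)}\big)

The dirty model of the paper under discussion is the two-component instance of this template: one component carries a block-type (ℓ1/ℓq) penalty to capture support shared across tasks, while the other carries an element-wise ℓ1 penalty to absorb task-private coefficients, as in the proximal-gradient sketch given earlier.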