2014
DOI: 10.1109/tip.2014.2331760

Learning Discriminative Dictionary for Group Sparse Representation

Abstract: In recent years, sparse representation has been widely used in object recognition applications. How to learn the dictionary is a key issue in sparse representation. A popular method is to use the ℓ1 norm as the sparsity measure on the representation coefficients for dictionary learning. However, the ℓ1 norm treats each atom in the dictionary independently, so the learned dictionary cannot capture the multi-subspace structure of the data well. In addition, the learned sub-dictionary for each class usual…
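As a rough numeric illustration of the abstract's point (a hedged sketch, not the paper's algorithm), the snippet below contrasts the atom-wise ℓ1 penalty with a group penalty summed over per-class coefficient blocks; all sizes and groupings are made-up assumptions.

```python
import numpy as np

# Sketch only: the l1 norm scores every atom independently, while a group
# penalty sums the l2 norms of per-class coefficient blocks, so atoms from
# the same sub-dictionary are kept or discarded together.
rng = np.random.default_rng(0)
a = rng.standard_normal(60)                 # coefficients over 60 atoms (assumed)
groups = np.split(np.arange(60), 3)         # hypothetical: 3 classes x 20 atoms

l1_penalty = np.abs(a).sum()                # atom-wise, structure-blind
group_penalty = sum(np.linalg.norm(a[g]) for g in groups)  # block-wise, l_{2,1}-style

print(f"l1: {l1_penalty:.3f}  group: {group_penalty:.3f}")
```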

Cited by 96 publications (45 citation statements)
References 35 publications (83 reference statements)
“…In previous research, the parametric feature model P(X|F, θ_P)P(F) has been characterized by the above regularized linear synthesis model with θ_P = D and a regularization term on F, e.g., the sparsity-inducing ℓ1 norm [15]-[17], [26], the squared ℓ2 norm [27], the group-sparsity-inducing ℓ1,2 norm [28], the low-rank-inducing nuclear norm [29], their combinations [30], a hierarchical prior [19], or the elastic net and Fisher term [3], [18]. Although all of these regularizations can compute MAP solutions of a discriminative dictionary and features to promote classification performance, this feature model suffers from the following intrinsic problems.…”
Section: B. Motivation
confidence: 99%
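For concreteness, here is a minimal sketch of the regularizers this statement surveys, evaluated on an arbitrary coefficient matrix F (atoms × samples); the names and dimensions are illustrative assumptions, not the cited papers' implementations.

```python
import numpy as np

# Hypothetical coefficient matrix F (atoms x samples); values are arbitrary.
rng = np.random.default_rng(0)
F = rng.standard_normal((64, 100))

l1      = np.abs(F).sum()                  # sparsity-inducing l1 norm
sq_l2   = (F ** 2).sum()                   # squared l2 norm
l12     = np.linalg.norm(F, axis=1).sum()  # group-sparsity l_{1,2}: l2 per row, summed
nuclear = np.linalg.norm(F, ord="nuc")     # low-rank-inducing nuclear norm

print(l1, sq_l2, l12, nuclear)
```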
“…Class-specific dictionary learning methods train a dictionary for each class of samples. Sun et al. [23] learned a class-specific sub-dictionary for each class and a common sub-dictionary shared by all classes to improve classification performance. Wang and Kong [24] proposed a method that explicitly learns a class-specific dictionary for each category, capturing that category's most discriminative features, while simultaneously learning a common pattern pool whose atoms are shared by all categories and contribute only to representing the data rather than to discrimination.…”
Section: Introduction
confidence: 99%
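One way such a class-specific-plus-common layout might be used at test time is the usual SRC-style rule: code the sample over the concatenated dictionary, then assign the class whose sub-dictionary (together with the shared atoms) reconstructs it best. The sketch below is a hedged illustration of that rule under assumed names and shapes, not the cited methods' actual implementations.

```python
import numpy as np

def classify(x, sub_dicts, D_common, code):
    """Assumed layout: code is the coefficient vector over the concatenated
    dictionary [D_1 | ... | D_C | D_common], computed beforehand by any
    sparse coder. Returns the class index with the smallest residual."""
    offsets = np.cumsum([0] + [D.shape[1] for D in sub_dicts])
    shared = D_common @ code[offsets[-1]:]   # contribution of the common atoms
    errors = [np.linalg.norm(x - (D_c @ code[offsets[c]:offsets[c + 1]] + shared))
              for c, D_c in enumerate(sub_dicts)]
    return int(np.argmin(errors))
```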
“…Sparse coding finds a small number of atoms in the dictionary with which to reconstruct the raw data. It can be solved by Orthogonal Matching Pursuit (OMP) [14], LASSO (least absolute shrinkage and selection operator) [13], or gradient descent algorithms [15]. The dictionary generally comes from one of two sources: construction from a mathematical model, or dictionary learning from training data.…”
Section: Introduction
confidence: 99%
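Both solvers named in this statement are available off the shelf; the sketch below uses scikit-learn with made-up dimensions to code a synthetic 3-sparse signal over a random unit-norm dictionary, so the recovered supports can be compared.

```python
import numpy as np
from sklearn.linear_model import Lasso, OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 200))          # 50-dim signals, 200 atoms (assumed)
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms, as is conventional
x = D[:, [3, 70, 145]] @ np.array([1.5, -2.0, 0.7])  # 3-sparse ground truth

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3).fit(D, x)
lasso = Lasso(alpha=0.01, max_iter=10_000).fit(D, x)

print(np.flatnonzero(omp.coef_))                      # atoms selected by OMP
print(np.flatnonzero(np.abs(lasso.coef_) > 1e-6))     # atoms selected by LASSO
```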