2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2015
DOI: 10.1109/icassp.2015.7178697

A sequential dictionary learning algorithm with enforced sparsity

Abstract: Dictionary learning algorithms have received widespread acceptance when it comes to data analysis and signal representation problems. These algorithms alternate between two stages: the sparse coding stage and the dictionary update stage. In all existing dictionary learning algorithms, the use of sparsity has been limited to the sparse coding stage, while the algorithms differ in the dictionary update stage, which can be carried out sequentially or in parallel. The singular value decomposition (SVD) has been succes…
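The abstract describes the generic two-stage structure shared by dictionary learning algorithms: sparse coding with the dictionary fixed, then a dictionary update, often atom by atom via rank-1 SVD. Below is a minimal sketch of that generic structure in Python, using orthogonal matching pursuit for the coding stage and a K-SVD-style sequential SVD update; the function names, the OMP coder, and all parameters are illustrative assumptions, not the paper's own algorithm (which, per the title, additionally enforces sparsity during the update stage).

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedy sparse code with at most k atoms."""
    residual, idx = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef              # re-fit on selected atoms
    x[idx] = coef
    return x

def dictionary_learning(Y, n_atoms, k, n_iter=20, seed=0):
    """Generic two-stage DL loop: sparse coding, then a sequential
    rank-1 SVD dictionary update (K-SVD style). Illustrative sketch only."""
    m, n = Y.shape
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((m, n_atoms))
    D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
    X = np.zeros((n_atoms, n))
    for _ in range(n_iter):
        # Stage 1: sparse coding (where classic DL enforces sparsity)
        X = np.column_stack([omp(D, Y[:, i], k) for i in range(n)])
        # Stage 2: sequential dictionary update, one atom at a time
        for j in range(n_atoms):
            users = np.nonzero(X[j, :])[0]           # signals using atom j
            if users.size == 0:
                continue
            # Restricted error matrix with atom j's contribution removed
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]                        # best rank-1 direction
            X[j, users] = s[0] * Vt[0]               # matching coefficients
    return D, X
```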

Cited by 22 publications (17 citation statements). References 25 publications.
“…In particular, the function used in the fidelity term belongs to the class of redescending M-estimators, which guarantee stability of inference for relatively large deviations from the nominal noise model [24]. The approach adopted for deriving the proposed algorithm exploits the observation that the observed data matrix can be approximated as a sum of rank-1 matrix approximations [25]-[27]. The proposed algorithm is obtained via adaptive sequential penalized rank-1 matrix approximation, where a block coordinate descent approach is used to determine the unknowns of the different rank-1 approximation matrices.…”
Section: Introduction
confidence: 99%
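This excerpt describes sequential penalized rank-1 approximation with block coordinate descent: each rank-1 term d_j x_j^T is refined by alternating a d-step and an x-step on the current residual. The sketch below illustrates that scheme under stated assumptions: it uses a plain least-squares fidelity with an l1 penalty and soft-thresholding, whereas the citing work uses a robust M-estimator fidelity term that is not reproduced here; all names and parameters are hypothetical.

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of the l1 penalty (one common sparsity penalty)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sequential_rank1_bcd(Y, n_atoms, lam=0.1, n_sweeps=10, seed=0):
    """Approximate Y as a sum of rank-1 matrices d_j x_j^T.

    Each term is refined by block coordinate descent: fix x_j and solve
    for d_j, then fix d_j and solve for x_j. Illustrative sketch with a
    least-squares fidelity, not the cited robust formulation."""
    m, n = Y.shape
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((m, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    X = np.zeros((n_atoms, n))
    for _ in range(n_sweeps):
        for j in range(n_atoms):
            # Residual with the j-th rank-1 term removed
            R = Y - D @ X + np.outer(D[:, j], X[j])
            # d-step: least-squares direction for fixed x_j, renormalized
            d = R @ X[j]
            if np.linalg.norm(d) > 0:
                D[:, j] = d / np.linalg.norm(d)
            # x-step: with unit-norm d_j, the l1-penalized least-squares
            # solution is a soft-thresholded correlation
            X[j] = soft_threshold(D[:, j] @ R, lam)
    return D, X
```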
See 3 more Smart Citations
“…where $d_j$ and $x_i$ denote the $j$-th column of the dictionary $D$ and the $i$-th column of the sparse code matrix $X$. Variants of (1) have also been used to design DL algorithms; among these, one can cite methods that replace the $\ell_0$-norm sparsity constraint by an $\ell_1$-norm penalty for sparsity [25], [26], [28], [30]-[33] or include additional properties on the dictionary such as incoherence [34], [35].…”
Section: Introduction
confidence: 99%
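The excerpt refers to a formulation (1) that the snippet does not reproduce. A standard statement of the $\ell_0$-constrained dictionary learning problem, together with the $\ell_1$-penalized variant the excerpt mentions, is given below; this reconstruction is an assumption based on the surrounding notation, with $Y$ the data matrix and $T_0$ an assumed sparsity level.

```latex
% A standard reading of the l0-constrained problem (1):
\min_{D,\,X} \; \|Y - DX\|_F^2
\quad \text{s.t.} \quad \|x_i\|_0 \le T_0 \;\; \forall i,
\qquad \|d_j\|_2 = 1 \;\; \forall j

% The l1-penalized relaxation mentioned in the excerpt:
\min_{D,\,X} \; \|Y - DX\|_F^2 + \lambda \sum_{i} \|x_i\|_1
\quad \text{s.t.} \quad \|d_j\|_2 = 1 \;\; \forall j
```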
See 2 more Smart Citations
“…Some of these algorithms (e.g., [30], [38], [40]) also partially update X in the dictionary update step. A few recent methods update D and X jointly in an iterative fashion [42], [43].…”
Section: Introduction
confidence: 99%
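This last excerpt contrasts sequential atom-by-atom schemes (the K-SVD-style sketch above already shows the partial update of X during the dictionary step) with methods that update D and X jointly. As a hedged illustration of the joint idea, here is one generic simultaneous step, a projected/proximal gradient update on $\|Y - DX\|_F^2 + \lambda\|X\|_1$; the objective, step size, and penalty are stand-in assumptions, not the specific methods of [42], [43].

```python
import numpy as np

def joint_gradient_step(Y, D, X, eta=1e-3, lam=0.1):
    """One joint update of D and X by simultaneous proximal gradient
    descent on ||Y - DX||_F^2 + lam*||X||_1 (generic stand-in objective)."""
    G = D @ X - Y                                  # shared residual
    D_new = D - eta * (G @ X.T)                    # gradient step in D
    D_new /= np.maximum(np.linalg.norm(D_new, axis=0), 1e-12)  # unit-norm atoms
    X_grad = X - eta * (D.T @ G)                   # gradient step in X
    X_new = np.sign(X_grad) * np.maximum(np.abs(X_grad) - eta * lam, 0.0)  # l1 prox
    return D_new, X_new
```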