2018
DOI: 10.1109/tnnls.2017.2740224
Jointly Learning Structured Analysis Discriminative Dictionary and Analysis Multiclass Classifier

Abstract: In this paper, we propose an analysis mechanism-based structured analysis discriminative dictionary learning (ADDL) framework. ADDL seamlessly integrates analysis discriminative dictionary learning, analysis representation, and analysis classifier training into a unified model. The applied analysis mechanism ensures that the learned dictionaries, representations, and linear classifiers over different classes are as independent and discriminating as possible. The dictionary is obtained by minimizi…

Cited by 145 publications (91 citation statements). References 40 publications.
“…For larger datasets such as MNIST and Fashion-MNIST, the loss is non-increasing in iterations and finally converges.

  [5]                        90.30%   91.58%
  DKSVD [7]                  92.31%   92.92%
  LC-KSVD1 [1]               93.67%   95.01%
  LC-KSVD2 [1]               94.49%   95.91%
  DLSI [9]                   95.48%   96.13%
  FDDL [8]                   95.38%   96.00%
  DPL [10]                   94.20%   95.00%
  LRSDL [19]                 96.26%   96.85%
  ADDL [12]                  95.90%   96.30%
  SCN-2 [6]                  96.37%   97.53%
  CDPL-Net (no DPL layers)   96.20%   97.44%
  Our CDPL-Net               97.70%   98.48%…”
Section: Convergence Analysis
confidence: 99%
“…DL obtains the sparse representation of samples via a linear combination of atoms in a dictionary. Classical DL algorithms include K-Singular Value Decomposition (K-SVD) [4], Discriminative K-SVD (D-KSVD) [7], Label-Consistent K-SVD (LC-KSVD) [1], Fisher Discrimination Dictionary Learning (FDDL) [8], Dictionary Learning with Structured Incoherence (DLSI) [9], Structured Analysis Discriminative Dictionary Learning (ADDL) [12], Projective Dictionary Pair Learning (DPL) [10] and Low-rank Shared Dictionary Learning (LRSDL) [19], etc. Compared with the other existing methods, both DPL and ADDL extend the regular DL into the dictionary pair learning, i.e., learning a synthesis dictionary and an analysis dictionary jointly to analytically code data.…”
Section: Introduction
confidence: 99%
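The excerpt above describes the core step shared by the cited DL methods: representing a sample as a sparse linear combination of dictionary atoms. A minimal sketch of that sparse-coding step, using ISTA on the lasso formulation with a random unit-norm dictionary (all dimensions and the penalty `lam` are illustrative assumptions, not values from any cited method):

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_sparse_code(D, x, lam=0.01, n_iter=500):
    """Approximately solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the data-fit term
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms, as in K-SVD-style DL
a_true = np.zeros(50)
a_true[[3, 17, 41]] = [1.0, -0.8, 0.5]     # a 3-sparse ground-truth code
x = D @ a_true
a = ista_sparse_code(D, x)
print(np.count_nonzero(np.abs(a) > 1e-3))  # number of active atoms in the code
```

Classical synthesis DL methods alternate this coding step with a dictionary-update step; the analysis-based methods discussed later replace the iterative coding with a learned linear projection.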
“…However, CS theory states that if x_j is sufficiently sparse in a transform (Ψ) domain, exact recovery is possible. Several recovery methods have been developed for this purpose [34], [35], [36], [37]. Specifically, if the transform coefficients, v_j = Ψx_j, are sufficiently sparse, the solution of the recovery procedure can be found with several ℓ0 optimization procedures or their ℓ1-based convex relaxations that use pursuit-based methods [15], [16].…”
Section: Introduction
confidence: 99%
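The pursuit-based recovery mentioned above can be sketched with orthogonal matching pursuit, a standard greedy pursuit method: recover a sparse coefficient vector from a small number of random linear measurements. The sensing matrix, dimensions, and sparsity level are illustrative assumptions (the transform Ψ is taken as identity, i.e., the signal is sparse in the sampling domain):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily select k atoms of Phi to fit y."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef          # re-fit on the support
    v = np.zeros(Phi.shape[1])
    v[support] = coef
    return v

rng = np.random.default_rng(1)
n, m, k = 100, 40, 4                        # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = [1.5, -1.0, 2.0, -0.5]
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = Phi @ x                                       # compressed measurements
v_hat = omp(Phi, y, k)
print(np.linalg.norm(v_hat - x) / np.linalg.norm(x))  # relative recovery error
```

With m = 40 measurements of a 4-sparse length-100 signal, greedy pursuit typically identifies the true support and the least-squares re-fit then recovers the coefficients essentially exactly.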
“…The second category aims at learning category-specific dictionaries to improve discrimination by assigning each sub-dictionary to a single subject class, i.e., structured dictionary learning. Popular methods in this category are Low-rank Shared DL (LRSDL) [52], Analysis Discriminative Dictionary Learning (ADDL) [22], Dictionary Learning with Structured Incoherence (DLSI) [21], and Projective Dictionary Pair Learning (DPL) [17]. DPL aims to obtain a structured synthesis dictionary and an analysis dictionary jointly; ADDL builds on the idea of DPL and aims to compute a structured analysis discriminative dictionary and an analysis multiclass classifier.…”
Section: Introduction
confidence: 99%
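The dictionary-pair idea behind DPL and ADDL can be sketched as follows: for each class, an analysis dictionary P codes a sample analytically (a = Px, with no sparse optimization), a synthesis dictionary D reconstructs it, and a test sample is assigned to the class with the smallest reconstruction error. Here the pairs are fit naively per class by truncated SVD, purely to illustrate the coding/classification step; this is not the ADDL training algorithm, and all data and dimensions are toy assumptions:

```python
import numpy as np

def fit_pair(X, n_atoms=5):
    """Fit a toy pair (D, P) so that D @ (P @ X) approximates X (truncated SVD)."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    D = U[:, :n_atoms]        # synthesis dictionary: top left singular vectors
    P = D.T                   # analysis dictionary: codes samples by projection
    return D, P

rng = np.random.default_rng(2)
# Two toy classes living near different 5-dimensional subspaces of R^30
bases = [rng.standard_normal((30, 5)) for _ in range(2)]
train = [B @ rng.standard_normal((5, 40)) for B in bases]
pairs = [fit_pair(X) for X in train]

x_test = bases[1] @ rng.standard_normal(5)        # a sample from class 1
# Classify by class-wise reconstruction error ||x - D_k P_k x||
errs = [np.linalg.norm(x_test - D @ (P @ x_test)) for D, P in pairs]
print(int(np.argmin(errs)))                       # predicted class label
```

The point of the analytic coder P is efficiency: unlike the sparse-coding methods in the first category, no iterative ℓ0/ℓ1 optimization is needed at test time, which is the property DPL introduced and ADDL extends with structured, discriminative constraints.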