2016
DOI: 10.1016/j.patcog.2016.01.028

Supervised dictionary learning with multiple classifier integration

Cited by 36 publications (10 citation statements)
References 61 publications
“…The classification accuracies of all methods are summarized in Table 3. Besides the algorithms already mentioned in the above experiments, we also add five state-of-the-art algorithms for comparison: EasyDL [45], MCDL [46], IC-DDL [47], SADL [48] and K-LSDSR [49]. We find that the analysis kernel KSVD algorithm achieves 96.8% classification accuracy, which shows that our method is competitive among all compared methods.…”
Section: E-YaleB Dataset
confidence: 89%
“…Typical discriminative sparse coding algorithms such as [4,5,9] focus on optimization problems generally similar to that of [4] as…”
Section: Discriminative Sparse Coding
confidence: 99%
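The discriminative sparse coding formulations cited above typically infer a code that trades off reconstruction against classification. As a minimal sketch, assuming an objective of the form 0.5‖y − Dx‖² + 0.5β‖h − Wx‖² + λ‖x‖₁ (the symbols D, W, h, β, λ are illustrative assumptions here, not the exact formulation of [4,5,9]), the code can be found by ISTA, since the smooth part is quadratic:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def discriminative_sparse_code(y, h, D, W, lam=0.1, beta=1.0, n_iter=200):
    """ISTA for min_x 0.5||y - Dx||^2 + 0.5*beta*||h - Wx||^2 + lam*||x||_1.

    y: signal, h: one-hot label vector, D: dictionary, W: linear classifier.
    """
    A = D.T @ D + beta * W.T @ W      # Hessian of the combined smooth term
    b = D.T @ y + beta * W.T @ h
    L = np.linalg.norm(A, 2)          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = A @ x - b
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

Because both quadratic terms fold into a single positive semidefinite matrix A, the discriminative term adds no cost beyond forming A and b once per signal.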
“…In order to preserve consistency between the test and training models, we also add the term β‖UΓ‖²_F to the discriminant loss F of (6), which results in (9). Doing so, we want to make sure the trained dictionary U has a proper structure also with respect to (5).…”
Section: Discriminative Recall Term G(l, γ, U)
confidence: 99%
“…An equiangular kernel dictionary is proposed in [29] to exploit the sparsity of high-dimensional visual data. In [30], [31], a dictionary and multiple classifiers are jointly learned to enhance the discriminative ability of the sparse codes. To capture the non-linear properties of data efficiently, a dictionary and a decision tree classifier are learned in [32].…”
Section: Introduction
confidence: 99%
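The joint learning of a dictionary and classifier described in [30], [31] is usually posed as alternating minimization over the codes, the dictionary, and the classifier. A minimal sketch, assuming the objective 0.5‖Y − DX‖²_F + 0.5β‖H − WX‖²_F + λ‖X‖₁ with one-hot label matrix H (all names and the single-classifier simplification are assumptions for illustration, not the cited papers' exact models):

```python
import numpy as np

def soft_threshold(V, t):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def joint_dictionary_classifier(Y, H, n_atoms=32, lam=0.1, beta=1.0,
                                n_outer=10, n_ista=50, seed=0):
    """Alternate between sparse codes X, dictionary D, and classifier W for
    min 0.5||Y - DX||_F^2 + 0.5*beta*||H - WX||_F^2 + lam*||X||_1."""
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    c = H.shape[0]
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)   # unit-norm atoms
    W = np.zeros((c, n_atoms))
    X = np.zeros((n_atoms, n))
    for _ in range(n_outer):
        # 1) sparse coding step: batch ISTA on the combined quadratic term
        A = D.T @ D + beta * W.T @ W
        B = D.T @ Y + beta * W.T @ H
        L = np.linalg.norm(A, 2)
        for _ in range(n_ista):
            X = soft_threshold(X - (A @ X - B) / L, lam / L)
        # 2) dictionary update: ridge least squares, then renormalise atoms
        G = X @ X.T + 1e-6 * np.eye(n_atoms)
        D = np.linalg.solve(G, X @ Y.T).T
        D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
        # 3) classifier update: ridge least squares on the learned codes
        W = np.linalg.solve(G, X @ H.T).T
    return D, W, X
```

At test time a signal is coded against D alone and classified by argmax of W @ x, which is why the training loss couples the reconstruction and classification terms through the shared codes X.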