2020
DOI: 10.1016/j.sciaf.2019.e00249
Kernel based locality-sensitive discriminative sparse representation for face recognition

Cited by 5 publications (5 citation statements) · References 27 publications
“…The classification accuracies of all methods are summarized in Table 3. Besides the algorithms already mentioned in the above experiments, we also add five state-of-the-art algorithms for comparison, which are EasyDL [45], MCDL [46], IC-DDL [47], SADL [48] and K-LSDSR [49]. We find that the analysis kernel KSVD algorithm can achieve the 96.8% classification accuracy, which shows that our method is competitive among all compared methods.…”
Section: E-YaleB Dataset
confidence: 89%
“…Just like the above mentioned two datasets, we also use the analysis KSVD algorithm for the classification experiment on the E-YaleB dataset. The classification accuracies are:

[39]                    91.9%
KSVD                    93.1%
LC-KSVD [41]            94.5%
EasyDL [45]             96.2%
MCDL [46]               95.8%
IC-DDL [47]             95.6%
SADL [48]               94.9%
kernel PCA              88.1%
kernel KSVD             91.8%
LKDL [41]               96.1%
K-LSDSR [49]            96.8%
analysis kernel KSVD    96.8%…”
Section: E-YaleB Dataset
confidence: 99%
“…Method                                                          Accuracy  Dataset
Local nonlinear multilayer contrast patterns (LNLMCP)           97.50     YaleB
Discriminative sparse representation via l2 regularization [35] 82.61     YaleB
GLRAM [32]                                                      97.25     AT&T
Fisher discriminative dictionary learning (FDDL) [33]           96.7      AT&T
PSO-KNN [31]                                                    98.75     AT&T
PCA-LDA fusion algorithm [31]                                   98.00     AT&T
Discriminative sparse representation via l2 regularization [35] 95.00     AT&T

… PSO + EDU and PSO + KNN trials. Subsequently, the ABC and GA meta-heuristic algorithms produced a similar result to PSO, but PSO is computationally less expensive than both.…”
Section: Discussion
confidence: 99%
“…The sparse approximation based methods outperform existing techniques in terms of classification accuracy and ease of implementation when training data are limited [14, 15, 16]. In the last decade, many sparse approximation based methods that perform very well have evolved [17, 18, 19]. In these methods, a test vector is approximated as a sparse linear combination of the training vectors, and the matching contribution of each class is then calculated to classify the test vector.…”
Section: Literature Survey
confidence: 99%
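The scheme described in the last citation statement — coding a test vector as a sparse combination of training vectors and classifying by per-class reconstruction residual — can be sketched in a few lines of numpy. This is a minimal illustration, not the algorithm of the cited paper: it substitutes greedy orthogonal matching pursuit for a proper l1 solver, and the names `omp` and `src_classify` are my own.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y using at most
    k columns (atoms) of D. Columns of D are assumed unit-norm."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # re-fit coefficients on the current support by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D @ x
    return x

def src_classify(D, labels, y, k=5):
    """Sparse-representation classification: code y over all training
    vectors, then assign the class whose own atoms reconstruct y with
    the smallest residual."""
    x = omp(D, y, k)
    best_class, best_err = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        err = np.linalg.norm(y - D[:, mask] @ x[mask])
        if err < best_err:
            best_class, best_err = c, err
    return best_class
```

On synthetic data with two well-separated classes, the test vector is assigned to the class whose training vectors explain it best; real face-recognition pipelines apply the same idea to vectorized (and often kernel-mapped) face images.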