2004
DOI: 10.1109/tcsvt.2003.821984

A New Covariance Estimate for Bayesian Classifiers in Biometric Recognition

Cited by 79 publications (60 citation statements). References 20 publications.
“…SDNMF efficacy initially increases as we partition each class into 2 up to 5 subclasses, where our algorithm attained its best performance, while further partitioning reduces recognition accuracy. This is attributed to the fact that, since training samples per subclass are limited, subclass covariance matrices evaluated on few examples are poorly estimated, which affects the correctness of the identified projection directions [34], [48].…”
Section: F. Object Recognition on the ETH-80 Dataset (mentioning)
confidence: 99%
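
The small-sample effect invoked in this excerpt is easy to reproduce. Below is a minimal Python sketch (my illustration, not code from the cited papers; the dimensionality and sample counts are arbitrary) showing that a sample covariance estimated from fewer observations than feature dimensions is rank-deficient, and stays poorly conditioned until the sample count far exceeds the dimensionality:

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 50                                   # feature dimensionality F

    for n_samples in (10, 60, 5000):
        # Draw from a Gaussian whose true covariance is the identity.
        X = rng.standard_normal((n_samples, dim))
        S = np.cov(X, rowvar=False)            # F x F sample covariance
        rank = np.linalg.matrix_rank(S)
        cond = np.linalg.cond(S) if rank == dim else float("inf")
        print(f"n={n_samples:5d}  rank={rank:3d}/{dim}  cond={cond:.1e}")

With n=10 the estimate is singular (rank at most 9 here), so projection directions derived from its inverse are undefined; even n=60 yields an invertible but unstable estimate.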
“…We should also note that for datasets whose number of training observations N is small compared to their dimensionality F (such as the Sheffield and ETH-80 datasets), computing the inverse of the MLE of the sample covariance matrix (16) by the EM-based methods, for instance SMDA and EM-MSDA, will be especially problematic (e.g. see [2], [53]). In these cases, we compute the inverse using the eigenvalue decomposition of the sample covariance matrix, keeping only the components whose eigenvalues are above a specific threshold [2].…”
Section: B. Evaluation (mentioning)
confidence: 99%
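
For concreteness, here is a minimal sketch of the inversion strategy quoted above (my own illustration; the threshold value and data are placeholders, not taken from the cited work): decompose the sample covariance into eigenvalues and eigenvectors, discard the components whose eigenvalues fall below the threshold, and invert only what remains:

    import numpy as np

    def truncated_inverse(S, threshold=1e-6):
        # Pseudo-inverse of a symmetric PSD matrix via eigendecomposition,
        # discarding eigen-components whose eigenvalues are <= threshold.
        eigvals, eigvecs = np.linalg.eigh(S)
        keep = eigvals > threshold
        inv_vals = np.zeros_like(eigvals)
        inv_vals[keep] = 1.0 / eigvals[keep]
        return (eigvecs * inv_vals) @ eigvecs.T    # V diag(1/lambda) V^T

    # With N < F the plain inverse does not exist, but the truncated one does.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((30, 100))             # N=30 samples, F=100 features
    S_inv = truncated_inverse(np.cov(X, rowvar=False))

Keeping only the well-estimated eigen-components is what makes the inverse usable when N is small relative to F; the discarded directions carry mostly estimation noise.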
“…Between FMSDA and EM-MSDA, we observe that the former tends to perform better when the data dimensionality is larger than the number of samples and, at the same time, many subclasses are necessary to capture the subclass structure of the data. In these cases, the training samples per subclass are limited and consequently the subclass covariance matrices are poorly estimated [53]. This adversely affects the performance of EM-based methods.…”
Section: B. Evaluation (mentioning)
confidence: 99%
“…To avoid both critical issues, we have calculated w_lda using a maximum uncertainty LDA-based approach (MLDA) that addresses stabilizing the S_w estimate with a multiple of the identity matrix [26, 25, 27].…”
Section: Linear Discriminant Analysis (LDA) (mentioning)
confidence: 99%
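
As a sketch of the stabilization idea this excerpt refers to (my reading of the maximum-uncertainty construction; the exact MLDA recipe of the cited references may differ in detail), one can expand S_w in its eigenbasis and lift every eigenvalue below the average eigenvalue up to that average, which amounts to adding a multiple of the identity matrix on the unreliable subspace:

    import numpy as np

    def stabilize_scatter(S_w):
        # Raise small, poorly estimated eigenvalues of the within-class
        # scatter matrix up to the mean eigenvalue before inversion.
        eigvals, eigvecs = np.linalg.eigh(S_w)
        mean_eig = eigvals.mean()                  # identity-multiple level
        fixed = np.maximum(eigvals, mean_eig)      # lift unreliable modes
        return (eigvecs * fixed) @ eigvecs.T

    rng = np.random.default_rng(2)
    X = rng.standard_normal((20, 40))              # fewer samples than features
    S_w = X.T @ X                                  # rank-deficient scatter
    S_w_stable = stabilize_scatter(S_w)            # full rank, safely invertible

Because the lifted eigenvalues never fall below the mean, S_w_stable is positive definite even when the raw scatter matrix is singular.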