2019
DOI: 10.1007/s41060-019-00201-4

Improved method for correcting sample Mahalanobis distance without estimating population eigenvalues or eigenvectors of covariance matrix

Abstract: The recognition performance of the sample Mahalanobis distance (SMD) deteriorates as the number of learning samples decreases. It is therefore important to correct the SMD so that it approximates the population Mahalanobis distance (PMD), i.e., the value that would be obtained with infinite learning samples. To reduce the computation time and cost of this correction, this paper presents a method that does not require estimating the population eigenvalues or eigenvectors of the covariance matrix. In …
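The deterioration the abstract describes is easy to reproduce: with few learning samples the sample covariance is ill-conditioned, and the SMD is strongly inflated relative to the PMD. The following minimal NumPy sketch illustrates the effect; it is not the paper's correction method, and the dimension and sample sizes are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 20                                    # feature dimension
mu, cov = np.zeros(p), np.eye(p)          # population mean and covariance

x = rng.multivariate_normal(mu, cov)      # a single test point
pmd2 = x @ np.linalg.solve(cov, x)        # squared PMD (true parameters)

for n in (25, 100, 10_000):               # number of learning samples
    train = rng.multivariate_normal(mu, cov, size=n)
    mu_hat = train.mean(axis=0)
    cov_hat = np.cov(train, rowvar=False)      # sample covariance
    d = x - mu_hat
    smd2 = d @ np.linalg.solve(cov_hat, d)     # squared SMD (estimates)
    print(f"n={n:6d}  SMD^2={smd2:9.2f}  PMD^2={pmd2:7.2f}")
```

When n is close to p, the squared SMD typically exceeds the squared PMD by a large factor; this bias is what the paper's correction targets.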

Cited by 4 publications (2 citation statements)
References 30 publications
“…This work also illustrates the causal effect of the mortality rate in poverty traps and the amortization of physical and human capital. Prior work [19] presents a method for correcting the sample Mahalanobis distance toward its population value by using the sample eigenvalues of the covariance matrix, and tests this algorithm on Gaussian mixture models fitted with the expectation-maximization algorithm.…”
Section: Literature Survey
confidence: 99%
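As background for this statement, the simplest correction of this kind is a scalar rescaling based on the inverse-Wishart mean: for the sample covariance S of n Gaussian samples in p dimensions, E[S⁻¹] = ((n − 1)/(n − p − 2)) Σ⁻¹, so dividing the squared SMD by that factor removes its leading bias. The sketch below implements only this classical baseline, assuming Gaussian data; it is neither the eigenvalue-based correction of [19] nor the eigenvalue-free method of the paper above.

```python
import numpy as np

def corrected_smd2(smd2: float, n: int, p: int) -> float:
    """Rescale a squared sample Mahalanobis distance by the classical
    first-order factor (n - p - 2) / (n - 1), which follows from
    E[S^-1] = (n - 1)/(n - p - 2) * Sigma^-1 for the sample covariance S
    of n Gaussian samples in p dimensions (valid for n > p + 2).
    Reference baseline only, not the correction method of the paper."""
    if n <= p + 2:
        raise ValueError("scalar correction requires n > p + 2")
    return (n - p - 2) / (n - 1) * smd2
```

This factor corrects only the mean of the squared SMD, not its spread across eigendirections of the covariance, which is what finer eigenvalue-aware corrections address.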
“…A lower bound on the log-likelihood is obtained by computing the free energy, which is bounded via the KL divergence. The free energy is given by Eq. (19), where q(s) denotes the cluster assignments. Owing to the nonnegativity of the KL divergence, the free energy becomes the bound in Eq. (20). The EM algorithm finds the optimal parameters by iterating two steps that optimize q(s) and the model parameters separately until the free energy, and hence the likelihood bound, is maximized.…”
Section: Sample Data Value Given Energy Membership Is Represented As
confidence: 99%
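Equations (19) and (20) of the citing paper are not reproduced in this excerpt. For reference, the standard variational free-energy decomposition that the excerpt appears to describe is

F(q, \theta) = \sum_{s} q(s)\,\log\frac{p(x, s \mid \theta)}{q(s)} = \log p(x \mid \theta) - \mathrm{KL}\bigl(q(s)\,\|\,p(s \mid x, \theta)\bigr) \quad \text{(plausibly Eq. (19))}

and, since \mathrm{KL}(\cdot\,\|\,\cdot) \ge 0,

F(q, \theta) \le \log p(x \mid \theta) \quad \text{(plausibly Eq. (20))}.

Under this reading, the E-step maximizes F over q(s), closing the KL gap, and the M-step maximizes F over \theta, matching the two alternating optimizations the excerpt describes.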