2012
DOI: 10.5539/ijsp.v1n2p113
A Short Note on Resolving Singularity Problems in Covariance Matrices

Abstract: In problems where a distribution is concentrated in a lower-dimensional subspace, the covariance matrix faces a singularity problem. In downstream statistical analyses this can cause a problem, as the inverse of the covariance matrix is often required in the likelihood. There are several methods to overcome this challenge. The most well-known ones are the eigenvalue, singular value, and Cholesky decompositions. In this short note, we develop a new method to deal with the singularity problem while preserving the …
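The singularity the abstract describes is easy to reproduce: if one coordinate of the data is an exact linear combination of the others, the sample covariance matrix is rank-deficient and has no ordinary inverse. A minimal NumPy sketch (the pseudo-inverse shown at the end is one generic workaround, not the method proposed in this note):

```python
import numpy as np

rng = np.random.default_rng(0)

# 3-dimensional data that actually lives in a 2-dimensional subspace:
# the third coordinate is an exact linear combination of the first two.
z = rng.standard_normal((500, 2))
x = np.column_stack([z[:, 0], z[:, 1], z[:, 0] + z[:, 1]])

cov = np.cov(x, rowvar=False)        # 3x3 sample covariance
rank = np.linalg.matrix_rank(cov)    # 2, not 3 -> singular

# np.linalg.inv(cov) is numerically meaningless here; the
# Moore-Penrose pseudo-inverse is one standard fallback.
cov_pinv = np.linalg.pinv(cov)
print(rank)  # 2
```

Because the rank deficiency is exact by construction, `matrix_rank` detects it reliably despite floating-point rounding.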

Cited by 6 publications (3 citation statements); references 11 publications (15 reference statements).
“…A multivariate Gaussian distribution was achieved via this procedure, but any preexisting interelectrode dependencies were likely weakened, and thus the estimates of I(X) and C_I(X) were likely distorted to some degree. Although it is possible to use Cholesky decomposition to implement a Gaussian transformation of multivariate data that preserves data covariance, this technique may fail when the covariance matrix is nearly singular due to high interdependencies among variables [95], such as is often observed for EEG signals.…”
Section: Discussion
confidence: 99%
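The failure mode this citing paper mentions is concrete: a Cholesky factorization exists only for positive-definite matrices, so an (exactly or numerically) singular covariance matrix is rejected. A short illustration using an exactly singular case, where the behavior is unambiguous:

```python
import numpy as np

# Covariance of two perfectly correlated variables (correlation = 1):
# positive semi-definite but not positive definite, so no Cholesky
# factorisation exists.
cov = np.array([[1.0, 1.0],
                [1.0, 1.0]])

try:
    np.linalg.cholesky(cov)
    ok = True
except np.linalg.LinAlgError:
    ok = False

print(ok)  # False: Cholesky rejects the singular matrix
```

For *nearly* singular matrices the outcome depends on rounding: the factorization may succeed but produce a badly conditioned factor, which is the instability the quoted passage warns about for highly interdependent EEG channels.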
“…Given that during scalable clustering we use the Mahalanobis distance measure to assign each object to a mini-cluster, and that the size of the temporary clusters is much smaller than that of the original clusters, an important matter is worth mentioning here. Concerning [41, 70–72], in the case of high-dimensional data, classical approaches based on the Mahalanobis distance are usually not applicable: when the cardinality of a cluster is less than or equal to its dimensionality, the sample covariance matrix becomes singular and non-invertible [73]; hence, the corresponding Mahalanobis distance is no longer reliable.…”
Section: Remarks
confidence: 99%
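The cardinality-versus-dimensionality condition in this quote can be checked directly: with n observations, the sample covariance has rank at most n − 1, so with n ≤ p it cannot be inverted and the classical Mahalanobis distance is undefined. A minimal sketch; substituting the pseudo-inverse in the distance is an illustrative workaround, not something the quoted paper prescribes:

```python
import numpy as np

rng = np.random.default_rng(1)

# 4 observations in 6 dimensions: cardinality <= dimensionality,
# so the sample covariance (rank <= n - 1 = 3) is singular.
x = rng.standard_normal((4, 6))
cov = np.cov(x, rowvar=False)
rank = np.linalg.matrix_rank(cov)  # at most 3, well below 6

mu = x.mean(axis=0)
d = x[0] - mu

# Classical Mahalanobis distance needs inv(cov), which does not exist;
# a pseudo-inverse-based variant is one common substitute.
dist = float(np.sqrt(d @ np.linalg.pinv(cov) @ d))
```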
“…An issue with the implementation of these formulae lies in the estimation of within- and between-sample variance–covariance matrices. It is well known, in statistics, that the (standard) maximum likelihood estimates of the variance–covariance matrices become singular with probability almost equal to 1 as the dimension of the data increases [9,10]. A singular or non-invertible matrix can cause problems in a number of ways.…”
Section: Introduction
confidence: 99%
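The high-dimensional failure of the maximum likelihood estimate described above occurs whenever the dimension p exceeds the sample size n. A small sketch; the ridge-style regularization at the end (with an arbitrary illustrative lambda) is one generic repair, not the remedy used by the quoted paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# p > n: 5 samples in 10 dimensions -> the MLE covariance is singular.
x = rng.standard_normal((5, 10))
n, p = x.shape
xc = x - x.mean(axis=0)
cov_mle = xc.T @ xc / n
rank_mle = np.linalg.matrix_rank(cov_mle)  # at most n - 1 = 4

# Shrinkage toward the identity (illustrative lambda) restores
# positive definiteness, hence invertibility.
lam = 0.1
cov_reg = cov_mle + lam * np.eye(p)
cov_reg_inv = np.linalg.inv(cov_reg)  # succeeds
```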