In many practical situations we would like to estimate the covariance matrix of a set of variables from an insufficient amount of data. More specifically, if we have N independent, identically distributed measurements of an M-dimensional random vector, the maximum likelihood estimate is the sample covariance matrix. Here we consider the case N < M, in which this estimate is singular (non-invertible) and therefore fundamentally deficient. We present a radically new approach to deal with this situation. Let X be the M × N data matrix, whose columns are the N independent realizations of the random vector with covariance matrix Σ. Without loss of generality, and for simplicity, we assume that the random variables have zero mean. We would like to estimate Σ from X. Let K be the classical sample covariance matrix. Fix a parameter 1 ≤ L ≤ N and consider an ensemble {Φ} of L × M random unitary matrices with Haar probability measure (isotropically random). Pre- and post-multiplying K by Φ and by the conjugate transpose of Φ, respectively, produces a nonsingular L × L reduced-dimension covariance estimate. A new estimate of Σ, denoted cov_L(K), is obtained by a) projecting the reduced covariance estimate back out (to M × M) through pre- and post-multiplication by the conjugate transpose of Φ and by Φ, respectively, and b) taking the expectation over the unitary ensemble. Another new estimate, this time of Σ^(-1), denoted invcov_L(K), is obtained by a) inverting the reduced covariance estimate, b) projecting the inverse back out (to M × M) through pre- and post-multiplication by the conjugate transpose of Φ and by Φ, respectively, and c) taking the expectation over the unitary ensemble. We show that the estimate cov is equivalent to diagonal loading. Both estimates, invcov and cov, retain the original eigenvectors and replace the formerly zero eigenvalues with nonzero ones. We give a closed-form analytical expression for invcov in terms of its eigenvector and eigenvalue decomposition.
We motivate the use of invcov through applications to linear estimation, supervised learning, and high-resolution spectral estimation. We also compare the performance of the invcov estimator with that of diagonal loading.
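The construction described in this abstract can be approximated numerically. The sketch below is an illustration, not the paper's closed-form result: it replaces the exact expectation over the Haar ensemble with a Monte Carlo average, samples orthonormal-row matrices Φ via QR of a Gaussian (column signs are immaterial, since Φ always appears in Φᵀ(·)Φ pairs), and uses small illustrative dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, L, trials = 8, 5, 4, 2000

# Synthetic zero-mean data with N < M, so the sample covariance K is singular.
X = rng.standard_normal((M, N))
K = X @ X.T / N

cov_est = np.zeros((M, M))      # Monte Carlo estimate of cov_L(K)
invcov_est = np.zeros((M, M))   # Monte Carlo estimate of invcov_L(K)
for _ in range(trials):
    # L x M matrix with orthonormal rows, via QR of an M x L Gaussian matrix.
    Q, _ = np.linalg.qr(rng.standard_normal((M, L)))
    Phi = Q.T                   # Phi @ Phi.T = I_L
    R = Phi @ K @ Phi.T         # L x L reduced covariance, generically nonsingular
    cov_est += Phi.T @ R @ Phi / trials
    invcov_est += Phi.T @ np.linalg.inv(R) @ Phi / trials
```

Both averaged estimates come out full rank even though K has rank N < M, consistent with the abstract's claim that the formerly zero eigenvalues become nonzero.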
This paper centers on the limiting eigenvalue distribution of random Vandermonde matrices with unit-magnitude complex entries. The phases of the entries are chosen independently and identically distributed from the interval [−π, π]. Various distributions for the phase are considered, and we establish the existence of the empirical eigenvalue distribution in the large-matrix limit in a wide range of cases. The growth rate of the maximum eigenvalue is examined and shown to be no faster than O(log N) and no slower than O(log N / log log N), where N is the dimension of the matrix. Additional results include the existence of the capacity of the Vandermonde channel (the limit integral of the expected log determinant).
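The empirical eigenvalue distribution in question can be sampled directly. The sketch below uses the standard normalization V[m, n] = exp(−i·m·θ_n)/√N with uniform phases (the matrix size and normalization here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200

# Random Vandermonde matrix with unit-magnitude entries:
# V[m, n] = exp(-i * m * theta_n) / sqrt(N), phases i.i.d. uniform on [-pi, pi].
theta = rng.uniform(-np.pi, np.pi, size=N)
V = np.exp(-1j * np.arange(N)[:, None] * theta[None, :]) / np.sqrt(N)

# Empirical eigenvalues of the Hermitian matrix V^H V.
eigs = np.linalg.eigvalsh(V.conj().T @ V)
```

Since every column of V has unit norm, the eigenvalues sum to N exactly, so the mean eigenvalue is 1 for every N; it is only the top of the spectrum that grows slowly, at the O(log N)-type rates stated above.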
Let {T_k}_{k=1}^∞ be a family of *-free identically distributed operators in a finite von Neumann algebra. In this work we prove a multiplicative version of the free central limit theorem. More precisely, let B_n = T_1^* T_2^* ... T_n^* T_n ... T_2 T_1; then B_n is a positive operator and B_n^(1/(2n)) converges in distribution to an operator Λ. We completely determine the probability distribution ν of Λ from the distribution μ of |T|^2. This gives us a natural map G : M_+ → M_+ with μ ↦ G(μ) = ν. We study how this map behaves with respect to additive and multiplicative free convolution. As an interesting consequence of our results, we illustrate the relation between the probability distribution ν and the distribution of the Lyapunov exponents for the sequence {T_k}_{k=1}^∞ introduced in [13].
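The link to Lyapunov exponents can be illustrated numerically: large i.i.d. Gaussian matrices are asymptotically *-free, and the eigenvalues of B_n^(1/(2n)) are the exponentials of the Lyapunov exponents of the matrix product. The sketch below computes the Lyapunov spectrum by the standard repeated-QR method (dimensions and the Gaussian model are illustrative choices, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 100, 30   # matrix dimension, number of factors in the product

# Repeated-QR computation of the Lyapunov spectrum of T_n ... T_2 T_1,
# where the T_k are i.i.d. Gaussian with entries N(0, 1/d). The running
# log of |diag(R)| tracks the growth rates of an orthonormal frame.
Q = np.eye(d)
lyap = np.zeros(d)
for _ in range(n):
    T = rng.standard_normal((d, d)) / np.sqrt(d)
    Q, R = np.linalg.qr(T @ Q)
    lyap += np.log(np.abs(np.diag(R))) / n
lyap = np.sort(lyap)

# The eigenvalues of B_n^(1/(2n)) are approximately exp(lyap).
```

For this Gaussian model the top exponent sits near 0 and the spectrum spreads downward, so the limiting operator Λ has eigenvalues spread over (0, 1]; the paper's map G describes the exact limit law ν from μ.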
In this work we study the asymptotic traffic flow in Gromov's hyperbolic graphs. We prove that, under certain mild hypotheses, the traffic flow in a hyperbolic graph tends to pass through a finite set of highly congested nodes, called the "core" of the graph. We provide a formal definition of the core in a very general context and study the properties of this set for several families of graphs.
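A toy illustration of the congestion phenomenon (not the paper's construction): in a balanced binary tree, a prototypical hyperbolic graph, geodesics between uniformly chosen node pairs concentrate on the root, which plays the role of the core.

```python
from collections import Counter
from itertools import combinations

def path_nodes(u, v):
    """Nodes on the unique u-v path in the binary tree where node 1 is the
    root and node k has children 2k and 2k+1 (parent of k is k // 2)."""
    ancestors_v = set()
    x = v
    while x:
        ancestors_v.add(x)
        x //= 2
    nodes = set()
    x = u
    while x not in ancestors_v:   # climb from u until an ancestor of v (the LCA)
        nodes.add(x)
        x //= 2
    lca = x
    nodes.add(lca)
    x = v
    while x != lca:               # climb from v up to the LCA
        nodes.add(x)
        x //= 2
    return nodes

depth = 6
n = 2 ** (depth + 1) - 1          # 127 nodes in the full tree
load = Counter()                  # geodesic load on each node
for u, v in combinations(range(1, n + 1), 2):
    for w in path_nodes(u, v):
        load[w] += 1
```

Here the root lies on every geodesic joining the two subtrees, so its load exceeds that of any leaf by more than an order of magnitude; in the paper's terminology, traffic concentrates on a small core.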
In this work we prove that the giant component of the Erdős–Rényi random graph G(n, c/n), for c a constant greater than 1 (the sparse regime), is not Gromov δ-hyperbolic for any δ, with probability tending to one as n → ∞. As a corollary, we provide an alternative proof that the giant component of G(n, c/n) with c > 1 has vanishing spectral gap almost surely as n → ∞.
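One way to see why δ cannot stay bounded (a toy computation, not the paper's proof): sparse random graphs contain long cycles, and evaluating Gromov's four-point condition directly shows that the hyperbolicity constant of a cycle grows linearly with its length.

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, s):
    # Single-source shortest-path distances by breadth-first search.
    dist = {s: 0}
    q = deque([s])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

def gromov_delta(adj):
    # Four-point condition: for each quadruple, sort the three pairwise
    # distance sums; delta is half the gap between the two largest, maximized
    # over all quadruples of vertices.
    dist = {v: bfs_dist(adj, v) for v in adj}
    delta = 0.0
    for x, y, z, w in combinations(adj, 4):
        s = sorted((dist[x][y] + dist[z][w],
                    dist[x][z] + dist[y][w],
                    dist[x][w] + dist[y][z]))
        delta = max(delta, (s[2] - s[1]) / 2)
    return delta

def cycle(n):
    # Adjacency map of the cycle graph C_n.
    return {i: ((i - 1) % n, (i + 1) % n) for i in range(n)}
```

For cycles of length divisible by 4 this gives δ(C_n) = n/4, so a graph containing arbitrarily long isometrically embedded cycles cannot be δ-hyperbolic for any fixed δ.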