2017
DOI: 10.1007/s00440-017-0778-9

On the convergence of the extremal eigenvalues of empirical covariance matrices with dependence

Abstract: Consider a sample of a centered random vector with unit covariance matrix. We show that under certain regularity assumptions, and up to a natural scaling, the smallest and the largest eigenvalues of the empirical covariance matrix converge, when the dimension and the sample size both tend to infinity, to the left and right edges of the Marchenko--Pastur distribution. The assumptions are related to tails of norms of orthogonal projections. They cover isotropic log-concave random vectors as…
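As a quick numerical illustration of the statement (a minimal sketch, not the paper's argument: it uses a standard Gaussian sample, which is one instance of an isotropic log-concave vector, with arbitrary illustrative dimensions), the extremal eigenvalues of the empirical covariance matrix can be compared against the Marchenko--Pastur edges (1 ∓ √γ)², where γ = p/n:

```python
import numpy as np

# Minimal sketch (assumptions: Gaussian data as a stand-in for the isotropic
# log-concave case; n and p are arbitrary illustrative choices with p/n <= 1).
rng = np.random.default_rng(0)
n, p = 4000, 1000
gamma = p / n

X = rng.standard_normal((n, p))   # n i.i.d. centered samples with unit covariance
S = X.T @ X / n                   # p x p empirical covariance matrix

eigs = np.linalg.eigvalsh(S)      # eigenvalues in ascending order
left, right = (1 - gamma**0.5) ** 2, (1 + gamma**0.5) ** 2

print(f"smallest eigenvalue {eigs[0]:.3f} vs MP left edge  {left:.3f}")
print(f"largest eigenvalue  {eigs[-1]:.3f} vs MP right edge {right:.3f}")
```

For n = 4000 and p = 1000 (γ = 0.25), the edges are 0.25 and 2.25, and the extremal eigenvalues land close to them; the paper's contribution is that this convergence persists for the dependent, non-product distributions described in the abstract.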

Cited by 18 publications (18 citation statements); references 45 publications. Citing publications span 2017 to 2024.
“…While the resolvent method has been successful in establishing many properties of Wigner and covariance matrices (see e.g. [50,30,55,26,45] as well as the recent work of [52,19] in a nonindependent setting), the model (1) for nonlinear kernels does not have the same independence structure as these models, and it is also not a sum of rank-one updates. These difficulties were overcome in [20] via Gaussian conditioning arguments, but strengthening the bounds of [20] to yield finer control of the Stieltjes transform m(z) near the real axis does not seem (in our viewpoint) more straightforward than our moment-based approach.…”
Section: Introduction (mentioning)
confidence: 99%
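
For context (a standard background definition, not part of the quoted work): for a p × p Hermitian matrix A with eigenvalues λ₁, …, λ_p, the Stieltjes transform referenced above is, up to normalization conventions,

$$
m(z) \;=\; \frac{1}{p}\,\operatorname{tr}\big((A - zI)^{-1}\big) \;=\; \frac{1}{p}\sum_{j=1}^{p}\frac{1}{\lambda_j - z},
\qquad z \in \mathbb{C}\setminus\mathbb{R}.
$$

Taking z = E + iη with small η > 0 averages the spectrum over windows of width about η around E, which is why "finer control of m(z) near the real axis" amounts to local information on the eigenvalues.
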
“…Let us refer to the classical works [33] and [3] regarding almost sure convergence of appropriately normalized singular values when the coordinates of the underlying distributions are i.i.d.; as well as more recent works [22,14,18,25,9,1,2,29,19,26,20,17,28,11,10,31,32,21,27,8]. For a more comprehensive list of results, we refer to surveys [24] and [30].…”
Section: Introduction (mentioning)
confidence: 99%
“…Sketch of the proof. The identical intensities assumption allows us to use the result of Chafaï and Tikhomirov (2018) for matrices with i.i.d. columns.…”
Section: Mathematical Formulation (mentioning)
confidence: 99%
“…Proof of Theorem 4. Let us write the result of Chafaï and Tikhomirov (2018) adapted to our complex case and adapt the dimension notation (n → p(n), m_n → n, but we keep the notation X_n). We additionally checked in all proofs and lemmas that the result still holds when we replace symmetric matrices by Hermitian ones and the scalar product of real vectors by Hermitian products of complex vectors, putting an absolute value on the Hermitian product when the original scalar product was squared.…”
(mentioning)
confidence: 99%
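
Read concretely (our gloss on the quoted adaptation, not a quotation from the cited proof), the last substitution is

$$
\langle x, y \rangle^{2} \;\longrightarrow\; \big|\langle x, y \rangle\big|^{2}, \qquad x, y \in \mathbb{C}^{p},
$$

which coincides with the real quantity when x and y have real coordinates, so the Hermitian version strictly extends the symmetric one.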