2014
DOI: 10.1016/j.jspi.2013.09.005

Random matrix theory in statistics: A review

Abstract: We give an overview of random matrix theory (RMT) with the objective of highlighting the results and concepts that have a growing impact on the formulation and inference of statistical models and methodologies. This paper focuses on a number of application areas, especially within the field of high-dimensional statistics, and describes how the development of theory and practice in high-dimensional statistical inference has been influenced by the corresponding developments in the field of RMT.

Cited by 163 publications (122 citation statements); references 257 publications.
Citation types: 3 supporting, 115 mentioning, 0 contrasting.

Citation statements (ordered by relevance):
“…Finally, the distribution of eigenvalues uncovered by the extreme high-dimension G 50 850 and G 50 8750 completed matrices displayed spectral distributions consistent with the behavior of spiked covariance models of high-dimensional covariance matrices (Paul and Aue 2014). The term spiked refers to a small number of dimensions that have large eigenvalues, while the vast majority of dimensions have eigenvalues equal in value to some arbitrary small number.…”
Section: The Distribution Of Genetic Variance In High Dimensions (supporting)
confidence: 52%
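To make the spiked picture concrete, the following numpy sketch (sample size, dimension, and spike values are illustrative choices, not taken from the cited study) simulates a two-spike model: the top two sample eigenvalues separate from the Marchenko–Pastur bulk, while the remaining eigenvalues cluster below the bulk edge.

```python
import numpy as np

# Minimal spiked-covariance simulation (illustrative parameters).
rng = np.random.default_rng(0)
n, p = 500, 200                      # samples, dimensions
X = rng.standard_normal((n, p))      # noise with identity covariance
X[:, 0] *= np.sqrt(25.0)             # spike: population eigenvalue 25
X[:, 1] *= np.sqrt(10.0)             # spike: population eigenvalue 10

S = X.T @ X / n                      # sample covariance matrix
eigvals = np.sort(np.linalg.eigvalsh(S))[::-1]

print(eigvals[:4])                   # two separated spikes, then the bulk
print((1 + np.sqrt(p / n)) ** 2)     # Marchenko-Pastur bulk edge (~2.66)
```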
“…Disciplines as diverse as ecology, mathematical physics, signal processing, and finance struggle with the problem of estimating large covariance matrices (Paul and Aue 2014), although in most cases they tend to be sample covariance matrices rather than the more derived variance component matrices that represent G. The use of covariance matrices to describe the distribution of variance in high dimensions rests on the additional assumption of multivariate normality (MVN), which also underlies much of quantitative genetic theory and the multivariate response to selection (Lande 1979). Substantial deviation from the MVN assumption can potentially obscure the distribution of variance across trait combinations (see fig.…”
Section: Introduction (mentioning)
confidence: 99%
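For context, the multivariate response to selection cited above (Lande 1979) is the multivariate breeder's equation Δz̄ = Gβ, where Δz̄ is the vector of changes in trait means, G is the additive genetic variance–covariance matrix, and β is the selection gradient; the MVN assumption underlies the derivation of this linear form.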
“…This effect is generally obtained in high-dimensional data and becomes more severe when c becomes smaller. [17][18][19][20] When c is not much larger than 1, the sample estimate is numerically ill conditioned, i.e., inverting it to estimate the precision matrix will amplify estimation error. Additionally, when c < 1, matrix S loses full rank.…”
Section: The Sample Covariance Estimator Is a Poor Estimator For Hi… (mentioning)
confidence: 99%
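A quick numpy illustration of this effect (the sizes are hypothetical), taking c = n/p and an identity population covariance: the condition number of the sample covariance S grows as c approaches 1, and S loses full rank once c < 1.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 100                                 # dimension
for n in (1000, 200, 110, 80):          # c = n/p = 10, 2, 1.1, 0.8
    X = rng.standard_normal((n, p))     # true covariance: identity
    S = X.T @ X / n                     # sample covariance
    print(f"c = {n / p:4.1f}  cond(S) = {np.linalg.cond(S):9.2e}  "
          f"rank(S) = {np.linalg.matrix_rank(S)} (p = {p})")
```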
“…Several algorithms have been proposed to solve the problem in Equation 19. To date, the most popular approach is the graphical lasso (glasso), where a solution to (Equation 19) is found by solving a series of coupled regression problems in an iterative fashion.…”
Section: The Graphical Lasso (mentioning)
confidence: 99%
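The quoted Equation 19 is not reproduced on this page; in the glasso literature it is typically the l1-penalized Gaussian log-likelihood over the precision matrix. A minimal sketch under that assumption, using scikit-learn's GraphicalLasso on synthetic data with a sparse (tridiagonal) true precision matrix:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p = 10
# Sparse ground-truth precision matrix: tridiagonal, positive definite.
Theta = (np.eye(p)
         + np.diag(0.4 * np.ones(p - 1), 1)
         + np.diag(0.4 * np.ones(p - 1), -1))
Sigma = np.linalg.inv(Theta)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=500)

# alpha is the l1 penalty weight; larger alpha yields a sparser estimate.
model = GraphicalLasso(alpha=0.05).fit(X)
print(np.round(model.precision_, 2))   # estimated sparse inverse covariance
```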
“…The conventional ITC-like methods have been developed for the large-scale sensor array in the framework of random matrix theory, which considers the asymptotic condition n, p → ∞ with p/n → c ∈ (0, ∞) and provides more accurate descriptions for the sample eigenvalues of the high-dimensional observations [6]. B. Nadler modified the Akaike information criterion (AIC) by increasing the penalty term based on the probability distribution of the largest sample eigenvalue [7].…”
Section: Introduction (mentioning)
confidence: 99%
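Nadler's modified penalty is not reproduced here; as a baseline, the classical information-theoretic criterion that such methods refine (the Wax–Kailath AIC) can be sketched as follows, with p sensors, n snapshots, and all parameter values illustrative:

```python
import numpy as np

def aic_num_sources(eigvals, n):
    """Classical Wax-Kailath AIC: pick the number of sources k that
    minimizes -2*log-likelihood + 2*k*(2p - k), where the likelihood
    compares geometric and arithmetic means of the noise eigenvalues."""
    lam = np.sort(eigvals)[::-1]               # descending order
    p = len(lam)
    scores = []
    for k in range(p):                         # candidate source counts
        tail = lam[k:]                         # presumed noise eigenvalues
        g = np.exp(np.mean(np.log(tail)))      # geometric mean
        a = np.mean(tail)                      # arithmetic mean
        scores.append(-2 * n * (p - k) * np.log(g / a) + 2 * k * (2 * p - k))
    return int(np.argmin(scores))

# Demo: p sensors, n snapshots, q = 2 sources in unit white noise.
rng = np.random.default_rng(0)
p, n, q = 12, 200, 2
A = rng.standard_normal((p, q))                # mixing matrix
X = A @ rng.standard_normal((q, n)) + rng.standard_normal((p, n))
S = X @ X.T / n                                # sample covariance
print(aic_num_sources(np.linalg.eigvalsh(S), n))  # expected output: 2
```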