2011
DOI: 10.1016/j.jmva.2010.08.007
Dimension estimation in sufficient dimension reduction: A unifying approach

Abstract: Sufficient Dimension Reduction (SDR) in regression comprises the estimation of the dimension of the smallest (central) dimension reduction subspace and its basis elements. For SDR methods based on a kernel matrix, such as SIR and SAVE, dimension estimation is equivalent to estimating the rank of a random matrix, which is the sample-based estimate of the kernel. A test for the rank of a random matrix amounts to testing how many of its eigenvalues or singular values are equal to zero. We propo…
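To make the rank-testing idea in the abstract concrete, below is a minimal sketch of a sequential rank test in Python. It uses the classical chi-square approximation for SIR (Li, 1991), not the unified statistic developed in the paper itself; the function name, the plain chi-square reference distribution, and the stopping rule are assumptions for illustration.

```python
import numpy as np
from scipy.stats import chi2

def sequential_rank_test(eigvals, n, p, h, alpha=0.05):
    """Sequentially test H0: rank <= d for d = 0, 1, ... and return the
    first d that is not rejected.  Illustrative only: uses the classical
    chi-square approximation for SIR (Li, 1991), where the statistic
    n * (sum of the p - d smallest eigenvalues) is referred to a
    chi-square with (p - d) * (h - 1 - d) degrees of freedom."""
    lam = np.sort(np.asarray(eigvals))[::-1]   # eigenvalues, largest first
    for d in range(min(p, h - 1)):
        stat = n * lam[d:].sum()               # mass in the trailing eigenvalues
        df = (p - d) * (h - 1 - d)
        if chi2.sf(stat, df) > alpha:          # cannot reject rank <= d
            return d
    return min(p, h - 1)
```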

Cited by 52 publications (58 citation statements). References 32 publications.
“…, it can also be shown that their sample means are approximately normally distributed for large T (see Appendix B). Under the same assumptions we can then obtain that M_h is asymptotically normal following similar arguments as Bura and Yang (2011) [17], who derived the asymptotic distribution of M when the data are i.i.d. draws from the joint distribution of (y, x).…”
Section: Proposition
confidence: 75%
“…It is based on the results of Section 3.1 and uses a sample counterpart to Σ⁻¹ Var(E(x|y)). The name derives from using the inverse regression of x on the sliced response y to estimate the reduction. For a univariate y, the method is particularly easy to implement, SIR's step functions being a simple nonparametric approximation to E(x|y).…”
Section: Sliced Inverse Regression
confidence: 99%
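The slicing construction quoted above is compact enough to sketch in code. The following is a minimal, NumPy-only SIR implementation assuming a univariate response, equal-count slices, and a full-rank sample covariance; it is an illustration of the technique, not the cited authors' code.

```python
import numpy as np

def sir_directions(X, y, n_slices=10):
    """Minimal Sliced Inverse Regression sketch: estimate a basis for the
    central subspace from the eigenvectors of a sample counterpart to
    Sigma^{-1} Var(E(x|y)).  Assumes full-rank sample covariance."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    Sigma = Xc.T @ Xc / n                         # sample covariance
    evals, evecs = np.linalg.eigh(Sigma)
    Sigma_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ Sigma_inv_sqrt                       # standardized regressors
    # Slice the response; slice means of Z are the step-function
    # (nonparametric) approximation to E(Z|y) mentioned in the excerpt.
    order = np.argsort(y)
    M = np.zeros((p, p))                          # kernel: Var(E(Z|y))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    lam, V = np.linalg.eigh(M)                    # ascending eigenvalues
    lam, V = lam[::-1], V[:, ::-1]                # reorder: largest first
    return lam, Sigma_inv_sqrt @ V                # directions on the x-scale
```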
“…The first two methods are based on the results in Theorem 1 while the third method relates to Theorem 2. In Step 3 of Algorithm 1, one needs to determine an appropriate value for the dimension d. This can be done in several ways, either by the use of some test statistic [Bura and Yang, 2011] or via a straightforward inspection of a plot of the eigenvalues.…”
Section: Standardize the Regressors
confidence: 99%
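As a companion to the formal tests, the eigenvalue-inspection route mentioned in this excerpt can be automated with a simple scree-style gap rule. The fragment below reuses the hypothetical sir_directions helper sketched earlier and assumes X and y are already in scope; the largest-relative-drop criterion is a heuristic assumption, not a method from the paper.

```python
lam, directions = sir_directions(X, y, n_slices=10)  # hypothetical helper above
# Heuristic: place d at the largest relative drop between successive
# eigenvalues, a crude stand-in for eyeballing a scree plot.
ratios = lam[:-1] / np.maximum(lam[1:], 1e-12)
d = int(np.argmax(ratios)) + 1
basis = directions[:, :d]                            # estimated d-dim basis
```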
“…If the nonlinearity in (1) is even about the origin, then the conditional expectation (4) will be zero and no information regarding the projection will be present in the data. To remedy this limitation, several methods involving second order moments have been proposed in the literature, see, for example, Cook and Weisberg [1991], Li and Wang [2007], Bura and Yang [2011]. The evaluation of some of the above methods on systems from the Wiener class can be found in Lyzell and Enqvist [2011].…”
Section: Inverse Regression
confidence: 99%
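The second-order remedy described in this excerpt can also be sketched briefly. Below is a minimal SAVE-style kernel in the spirit of Cook and Weisberg [1991], built from within-slice covariances so that it remains informative when E(x|y) vanishes; slicing scheme and normalization are assumptions for illustration.

```python
import numpy as np

def save_kernel(X, y, n_slices=10):
    """Minimal SAVE sketch: second-moment kernel
    M = sum_s w_s (I - Cov(Z | slice s))^2, which stays informative even
    when the link is even about the origin and E(x|y) = 0."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(Xc.T @ Xc / n)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T   # Sigma^{-1/2}
    Z = Xc @ W                                     # standardized regressors
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        C = np.cov(Z[idx].T, bias=True)            # within-slice covariance
        D = np.eye(p) - C
        M += (len(idx) / n) * D @ D                # weighted (I - C)^2
    return M, W
```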
“…In the one-dimensional case, it is well known that if ϕ(t) is elliptically distributed, the linear least-squares estimate of B is consistent [Bussgang, 1952]. In this paper, we are going to consider a related method called sliced inverse regression (SIR) [Li, 1991], which has had a considerable influence in the field of dimension reduction in the statistical community [see, for example, Bura and Cook, 2001, Li and Wang, 2007, Bura and Yang, 2011].…”
Section: Introduction
confidence: 99%