1976
DOI: 10.2307/2285332

The Generalized Jackknife: Finite Samples and Subsample Sizes

Cited by 5 publications (4 citation statements); references 0 publications.
“…Comparison of (10), the expansion of the jackknifed estimator θ̃, shows that θ̃ and θ̂ differ in a very minor way. Indeed Thorburn (1976) has shown that as n → ∞, lim inf(var(θ̃)/var(θ̂)) ≥ 1, which is the price paid for bias reduction. The operation of jackknifing alone will not induce asymptotic normality of (4).…”
Section: As in Hinkley (mentioning)
confidence: 99%
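Thorburn's point that bias reduction is paid for in variance can be illustrated numerically. The sketch below is hypothetical and not from the cited papers: it uses the fact that the delete-one jackknife of the plug-in variance is exactly the unbiased sample variance s² = n/(n−1)·θ̂, so jackknifing inflates the estimator's variance by the deterministic factor (n/(n−1))² > 1.

```python
import numpy as np

# Monte Carlo sketch: compare the sampling variance of the plug-in variance
# estimator with that of its delete-one jackknife (the unbiased s^2).
rng = np.random.default_rng(1)
n, reps = 10, 20000
samples = rng.normal(size=(reps, n))

plug_in = samples.var(axis=1)              # biased plug-in estimator (divides by n)
jackknifed = samples.var(axis=1, ddof=1)   # its delete-one jackknife, s^2

ratio = jackknifed.var() / plug_in.var()
print(ratio)  # (n/(n-1))**2, about 1.23: variance is the price of bias removal
```

Since s² is a deterministic rescaling of the plug-in estimator here, the ratio is exactly (n/(n−1))², consistent with the variance-ratio lower bound of 1 quoted above.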
“…The idea of splitting a sample of size n into g groups of size h each in order to reduce bias, where one by one each group has a turn to be removed from the sample, was introduced by Quenouille (1949, 1956). In what is to follow we will stay with the case n = g, h = 1, although everything we do can be done equally for h > 1; Sharot (1976) showed that in terms of mean squared error, h = 1 is the best choice. Let Y₁, …, Yₙ denote the sample, distributed according to a distribution function F that depends on an unknown parameter θ. We wish to estimate or test θ; suppose θ̂(Y₁, …, Yₙ) is such an estimate, and that θ̂₋ᵢ is the estimate obtained by applying the estimation procedure to the sample with the ith random variable removed, i.e.…”
Section: Introduction (mentioning)
confidence: 99%
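The delete-one scheme described in the excerpt (n = g, h = 1) can be sketched in a few lines. This is a minimal illustration under our own naming, not code from the cited paper:

```python
import numpy as np

def jackknife(estimator, y):
    """Delete-one jackknife (n = g, h = 1): Quenouille's bias-corrected estimate."""
    n = len(y)
    theta_full = estimator(y)
    # theta_{-i}: the estimate recomputed with the i-th observation removed
    theta_loo = np.array([estimator(np.delete(y, i)) for i in range(n)])
    return n * theta_full - (n - 1) * theta_loo.mean()

# Jackknifing the biased plug-in variance recovers the unbiased sample variance
rng = np.random.default_rng(0)
y = rng.normal(size=20)
print(jackknife(np.var, y))   # equals np.var(y, ddof=1)
```

The classical identity that the jackknifed plug-in variance equals s² makes a convenient check of the implementation.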
“…Finally, θ̂⁽²⁾ may be theoretically less biased than other estimators; see Sharot (1976) for a comparison with θ̂⁽¹⁾. First, the infinitesimal jackknife is not a true jackknife in the sense that it is not based on reapplications of the original estimator t to subsamples of the data and thus its applicability may not be as wide as for the other jackknives.…”
Section: Discussion (mentioning)
confidence: 99%
“…For this purpose, different levels of imbalance were applied to the data set, which corresponded to 58 single-cross hybrids. The cross-validation was performed by resampling a group of individuals using the generalized Jackknife procedure [ 23 ]. The generalized Jackknife method is based on dividing the sample data set C into g groups of equal size k, so that C = gk.…”
Section: Methods (mentioning)
confidence: 99%
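The delete-a-group scheme this excerpt describes (g groups of equal size, each removed in turn) can be sketched as follows. The function name and the equal-group-size assumption are ours, not code from the cited study:

```python
import numpy as np

def grouped_jackknife(estimator, y, g):
    """Generalized (delete-a-group) jackknife: split the n = g*k observations
    into g groups of size k, remove each group in turn, and bias-correct.
    Choosing g = n recovers the ordinary delete-one jackknife."""
    n = len(y)
    assert n % g == 0, "sample size must divide evenly into g groups"
    groups = np.split(np.arange(n), g)
    theta_full = estimator(y)
    # theta with group j deleted, for each of the g groups
    theta_del = np.array([estimator(np.delete(y, idx)) for idx in groups])
    return g * theta_full - (g - 1) * theta_del.mean()
```

For a linear statistic such as the sample mean the correction is exact and returns the full-sample estimate unchanged, which is a quick sanity check on the grouping logic.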