1978
DOI: 10.1016/0378-3758(78)90021-6
On the stability of some replication variance estimators in the linear case

Cited by 12 publications (9 citation statements). References 13 publications.
“…This balanced orthogonal multi-array was used to apply the method and required only R = n = 240 replicates, which is the same as the jackknife. The randomly grouped balanced repeated replications method does as well in terms of relative bias, but, as is expected from the theoretical results of Krewski (1978), it is much less stable. Viewing Table 4, we see that for r, b and c the balanced orthogonal multi-array method and the jackknife method both perform well and similarly.…”
Section: R R S Irrer
confidence: 52%
“…If one wished to use the method of Gurney & Jewett (1975) in this situation, it would require a much larger number of replicates. Krewski (1978, Table 4) shows the observed value for the simulation was 13.27. For ȳ_st the balanced orthogonal multi-array and the jackknife variance estimates are algebraically equivalent.…”
Section: R R S Irrer
confidence: 99%
“…= W_h²/(n_h − 1). When n_h = 2 for all h, a balanced half-samples variance estimator for ȳ_st is (McCarthy, 1969) v_b(ȳ_st) = R⁻¹ Σ_r (ȳ_st^(r) − ȳ_st)², where ȳ_st^(r) = Σ_h W_h ȳ_{h,δ(r,h)} is the mean of the rth half-sample, for r = 1, ..., R, h = 1, ..., L, and δ(r, h) = 1 or respectively 2 if the (r, h)th entry of an R × L submatrix of a Hadamard matrix is 1 or respectively −1. This unbiasedness result, which follows easily from (3) below, does not tell us the efficiency loss of v_GB due to grouping (Krewski, 1978).…”
confidence: 93%
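The balanced half-samples construction quoted above can be sketched numerically. This is a minimal illustration, not code from any of the cited papers: the function names, the Sylvester construction of the Hadamard matrix, and the example data are all assumptions. With n_h = 2 units per stratum and pairwise-orthogonal Hadamard columns, the replicate-based estimator reproduces the textbook unbiased variance Σ_h W_h²(y_h1 − y_h2)²/4 exactly.

```python
import numpy as np

def hadamard(k):
    # Sylvester construction of a Hadamard matrix of order 2**k (an
    # illustrative choice; any Hadamard matrix of suitable order works).
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

def balanced_half_sample_var(y, W):
    """McCarthy-style balanced half-samples variance estimator (sketch).

    y: (L, 2) array, the two observations in each of L strata (n_h = 2).
    W: (L,) stratum weights summing to 1.
    """
    L = len(W)
    k = int(np.ceil(np.log2(L + 1)))
    H = hadamard(k)                      # order R = 2**k >= L + 1
    S = H[:, 1:L + 1]                    # R x L submatrix; +1 picks unit 1, -1 picks unit 2
    ybar_st = W @ y.mean(axis=1)         # full-sample stratified mean
    reps = np.where(S == 1, y[:, 0], y[:, 1]) @ W   # R half-sample means
    return np.mean((reps - ybar_st) ** 2)

# Illustrative data (assumed, not from the paper): L = 3 strata, equal weights.
y = np.array([[1.0, 3.0], [2.0, 6.0], [0.0, 4.0]])
W = np.full(3, 1 / 3)
v = balanced_half_sample_var(y, W)
```

For this example, v agrees with Σ_h W_h²(y_h1 − y_h2)²/4 because the cross-stratum terms cancel by column orthogonality of the Hadamard submatrix, which is the unbiasedness property the quoted passage refers to.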
“…[see Krewski (1978)] where β_2h(y) = λ_40h = μ_40h/μ_20h², with λ_rsh = μ_rsh/(μ_20h^r μ_02h^s)^{1/2},…”
Section: Introduction
confidence: 99%
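The standardized bivariate moments in the passage above can be illustrated with a short sketch (an assumption for illustration, not code from the cited work): λ_rs = μ_rs/(μ_20^r μ_02^s)^{1/2}, so λ_40 reduces to the kurtosis μ_40/μ_20², which is approximately 3 for normal data.

```python
import numpy as np

def lam(y, x, r, s):
    # Standardized central moment lambda_{rs} = mu_rs / (mu_20^r * mu_02^s)**0.5,
    # computed from sample central moments of (y, x).
    dy, dx = y - y.mean(), x - x.mean()
    mu_rs = np.mean(dy**r * dx**s)
    mu20, mu02 = np.mean(dy**2), np.mean(dx**2)
    return mu_rs / (mu20**r * mu02**s) ** 0.5

# Simulated normal data (illustrative, not from the paper).
rng = np.random.default_rng(1)
y = rng.normal(size=10_000)
x = rng.normal(size=10_000)
beta2 = lam(y, x, 4, 0)   # kurtosis of y: mu_40 / mu_20**2
```

Note that lam(y, x, 2, 0) is identically 1 by construction, a quick sanity check on the standardization.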