1982
DOI: 10.2307/2335985

Conditional Score Functions: Some Optimality Results

Abstract: SUMMARY: The conditional score function has previously been shown to generate…

Cited by 21 publications (29 citation statements)
References 3 publications
“…$(Y_i - \mu_i)$, $r = 1, 2$; $i = 1, 2, \ldots, N$, inducing the need to estimate $\alpha$. From Lindsay [14], Hansen [3], and Small and McLeish [11], Qu et al. [2] show that because $C_N(\beta) - \Sigma_N \xrightarrow{p} 0$, the estimating equations given in Equation (6) are fully efficient when the covariance structure is correctly specified and they are in the same class as GEE, and always optimal in the Löwner ordering among estimating equations within their given class.…”
Section: Quadratic Inference Functions
Citation type: mentioning; confidence: 99%
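For readers of the excerpt above, the optimality being invoked is the usual Godambe–Lindsay criterion for unbiased estimating functions; the following is a hedged sketch of the standard definitions, not text from the citing paper. For an unbiased estimating function $g$ with $E_\beta\{g(Y;\beta)\} = 0$, write $D_g = E_\beta\{\partial g/\partial \beta^{\top}\}$ and $V_g = \mathrm{Var}_\beta\{g\}$; its Godambe information is

$$ J_g(\beta) = D_g^{\top}\, V_g^{-1}\, D_g, $$

and $g^{*}$ is optimal within a class $\mathcal{G}$ in the Löwner ordering if $J_{g^{*}}(\beta) - J_g(\beta)$ is nonnegative definite for every $g \in \mathcal{G}$, equivalently if the asymptotic covariance of the resulting estimator is smallest in that ordering.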
“…In particular, unbiasedness of the estimating equation essentially guarantees consistency, and the condition on the derivative matrix guarantees asymptotic optimality within a limited class of estimating functions. For further details we refer the reader to Godambe (1960), Godambe and Thompson (1974), Lindsay (1982) and Godambe and Heyde (1987).…”
Section: Justification Of Adjusted Profile Likelihood
Citation type: mentioning; confidence: 99%
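The two conditions in the excerpt above can be made concrete with standard estimating-equation asymptotics; this is a sketch in generic notation, not a quotation from the citing paper. If $\hat\theta$ solves the unbiased estimating equation $\sum_{i=1}^{n} g(Y_i; \hat\theta) = 0$, then under the usual regularity conditions

$$ \sqrt{n}\,\bigl(\hat\theta - \theta_0\bigr) \;\xrightarrow{d}\; N\!\Bigl(0,\; D_g^{-1}\, V_g\, \bigl(D_g^{-1}\bigr)^{\!\top}\Bigr), \qquad D_g = E\!\left[\frac{\partial g}{\partial \theta^{\top}}\right], \quad V_g = \mathrm{Var}\{g\}. $$

Unbiasedness, $E_{\theta_0}\{g(Y;\theta_0)\} = 0$, makes $\theta_0$ a root of the limiting equation, which is what drives consistency; asymptotic optimality within a class of estimating functions then amounts to this sandwich variance (equivalently, the inverse Godambe information) being smallest, in the Löwner sense, over the class.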
“…Our goal is to adjust the profile log-likelihood so that the mean of the score function is zero and the variance of the score function equals its negative expected derivative matrix. In the terminology of Godambe (1960) and Lindsay (1982), our goal is to adjust the profile log-likelihood score function so that it is unbiased and information unbiased. The hope is that, by making these adjustments, the asymptotic behaviour of the quantities derived from the likelihood (e.g.…”
Section: Introduction
Citation type: mentioning; confidence: 99%
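To make the two target properties concrete (a minimal sketch using standard terminology; $U$ below is a generic adjusted profile score, not the authors' notation), the adjusted score $U(\theta)$ should satisfy the first- and second-order Bartlett-type identities

$$ E_\theta\{U(\theta)\} = 0 \quad\text{(unbiasedness)}, \qquad \mathrm{Var}_\theta\{U(\theta)\} = -\,E_\theta\!\left\{\frac{\partial U(\theta)}{\partial \theta^{\top}}\right\} \quad\text{(information unbiasedness)}, $$

which a genuine log-likelihood score satisfies automatically and an unadjusted profile score generally does not.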
“…$(\beta)$ is the empirical covariance matrix for the extended score equations. Minimizing the value of $Q_N(\beta)$ to estimate $\beta$ is asymptotically equivalent to solving the optimal estimating equations [6, 7, 14, 15]. In practice, $C_N(\beta)$ replaces the optimal $\Sigma_N = E[C_N(\beta)]$, as $C_N(\beta) - \Sigma_N \xrightarrow{p}$ …”
Section: Quadratic Inference Functions
Citation type: mentioning; confidence: 99%
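For concreteness, the quantities in the excerpt above can be written out in the standard quadratic inference function form of Qu, Lindsay and Li (2000); this is a hedged sketch of that formulation (up to normalization conventions), with $g_i$ denoting the extended score for subject $i$, and it may differ in notation from the citing paper:

$$ \bar g_N(\beta) = \frac{1}{N}\sum_{i=1}^{N} g_i(\beta), \qquad C_N(\beta) = \frac{1}{N}\sum_{i=1}^{N} g_i(\beta)\, g_i(\beta)^{\top}, \qquad Q_N(\beta) = N\, \bar g_N(\beta)^{\top}\, C_N(\beta)^{-1}\, \bar g_N(\beta), $$

with $\hat\beta = \arg\min_\beta Q_N(\beta)$. Replacing the optimal weight $\Sigma_N = E\{C_N(\beta)\}$ by its empirical counterpart $C_N(\beta)$ is justified asymptotically precisely because $C_N(\beta) - \Sigma_N \xrightarrow{p} 0$.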