1975
DOI: 10.1002/j.2333-8504.1975.tb01046.x

Maximum Likelihood Estimation by Means of Nonlinear Least Squares

Abstract: Methods are given for using readily available nonlinear regression programs to produce maximum likelihood estimates in a rather natural way. Used as suggested, the common Gauss-Newton algorithm for nonlinear least squares becomes the Fisher scoring algorithm for maximum likelihood estimation. In some cases it is also the Newton-Raphson algorithm. The standard errors produced are the information theory standard errors up to a possible common multiple. This means that much of the auxiliary output produced by a no…
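As a hedged illustration of the paper's central idea (not code from the report itself), the sketch below fits a Poisson log-linear model by Gauss-Newton least squares on inverse-variance-weighted residuals. Written this way, each Gauss-Newton step is exactly a Fisher-scoring step, and the weighted Jacobian yields the information-theory standard errors the abstract mentions. The model choice and all names are assumptions for illustration.

```python
import numpy as np

# Minimal sketch (assumed example, not the paper's code): maximum
# likelihood for a Poisson log-linear model via weighted Gauss-Newton.
# mu(theta) = exp(X @ theta); Var(y_i) = mu_i, so weights are 1/mu_i.

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
theta_true = np.array([0.5, -0.3])
y = rng.poisson(np.exp(X @ theta_true))

theta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ theta)
    D = mu[:, None] * X            # Jacobian d mu / d theta
    W = 1.0 / mu                   # inverse-variance weights
    # Gauss-Newton step on sqrt(W)-scaled residuals == Fisher scoring
    step, *_ = np.linalg.lstsq(np.sqrt(W)[:, None] * D,
                               np.sqrt(W) * (y - mu), rcond=None)
    theta = theta + step
    if np.max(np.abs(step)) < 1e-10:
        break

# Information-theory standard errors from the same weighted Jacobian
info = D.T @ (W[:, None] * D)
se = np.sqrt(np.diag(np.linalg.inv(info)))
print("estimates:", theta, "standard errors:", se)
```

The same recipe applies to other regular models: supply the mean function, its Jacobian, and inverse-variance weights to an ordinary nonlinear least-squares routine, and the iterations perform Fisher scoring.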

Cited by 67 publications (48 citation statements) · References 6 publications
“…Model (2) postulates that the error is additive, and is normally distributed over replications of judgments with constant variance σ_k^2 (within each subject). This error model was implicit in the initial scaling phase of classical MDS [Torgerson, 1952].…”
Section: Likelihood Function (mentioning)
confidence: 99%
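Model (2) itself is not reproduced in the snippet; as an illustration only, an additive normal error model of the kind described (replications within each subject, constant variance per subject) can be sketched as follows, with all indices and symbols assumed:

```latex
% Assumed notation: d^{(k)}_{ij} is subject k's judged dissimilarity of
% stimuli i and j, and \delta_{ij} is the model distance.
d^{(k)}_{ij} = \delta_{ij} + \epsilon^{(k)}_{ij},
\qquad \epsilon^{(k)}_{ij} \sim N\!\left(0, \sigma_k^2\right)
\quad \text{i.i.d. over replications.}
```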
“…Let the expectation of y be μ and the covariance matrix of y be Λ. Two remarkable properties of the regular exponential likelihood function have been derived by Jennrich and Moore (1975 …”
Section: The Estimation Methods and Relevant Statistical Theory (mentioning)
confidence: 99%
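The quote is cut off mid-sentence; for context, the two properties usually attributed to Jennrich and Moore (1975) for regular exponential-family likelihoods can be sketched as follows, assuming the notation μ = E[y] and Λ = Cov(y):

```latex
% Score and expected information for a regular exponential family with
% mean \mu(\theta) and covariance \Lambda (assumed standard notation):
\frac{\partial \ell}{\partial \theta}
  = \left(\frac{\partial \mu}{\partial \theta}\right)^{\!\top}
    \Lambda^{-1} (y - \mu),
\qquad
\mathcal{I}(\theta)
  = \left(\frac{\partial \mu}{\partial \theta}\right)^{\!\top}
    \Lambda^{-1} \frac{\partial \mu}{\partial \theta}.
```

Because the expected information has this weighted Gauss-Newton form, a Fisher-scoring step coincides with a Gauss-Newton step on Λ^{-1/2}-weighted residuals, which is what lets a nonlinear least-squares program compute the maximum likelihood estimate.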
“…Note that the number of degrees of freedom ν equals the number of elements in y minus the number of elements in θ. The motivations for this correction are that the results are analogous to the standard errors in nonlinear least squares problems, and that it does not affect the nice asymptotic properties, because S^2 approaches one as the sample size approaches infinity (Jennrich and Moore 1975). Note that without this correction, a variable that contributes practically nothing to the reduction in S^2 is sometimes found to have a t-ratio of large magnitude, say, about 4 or 5.…”
Section: The Estimation Methods and Relevant Statistical Theory (mentioning)
confidence: 99%
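A minimal sketch of the degrees-of-freedom correction described in the quote, under assumed notation (D the Jacobian of the mean, W the inverse-variance weights; not the cited papers' code):

```python
import numpy as np

def scaled_standard_errors(D, W, resid):
    """Standard errors with the S^2 degrees-of-freedom correction.

    D: (n, p) Jacobian of the mean, W: (n,) inverse-variance weights,
    resid: (n,) raw residuals y - mu. Names are illustrative, not the
    cited papers' notation.
    """
    n, p = D.shape
    info = D.T @ (W[:, None] * D)          # Fisher information
    S2 = np.sum(W * resid**2) / (n - p)    # approaches 1 as n -> infinity
    cov = S2 * np.linalg.inv(info)         # corrected covariance
    return np.sqrt(np.diag(cov))
```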
“…The program utilizes the Gauss-Newton method for non-linear least squares estimation by modifying the scoring algorithm to accommodate maximum likelihood problems; see Jennrich and Moore (1975) or Petersen (1986b) for details. The parameter estimates and their estimated standard deviations are given in Table 1. The calculation of the 'residuals' r_i = −log[S(t_i; z_i; θ̂, β̂)] (see, for example, Cox and Oakes (1984), p. 89, or Kalbfleisch and Prentice (1980), p. 96) of model 1 and plotting r against the logarithm of the proportion of residuals exceeding r shows a good fit to a straight line with slope −1.…”
Section: Empirical Analysis (mentioning)
confidence: 99%
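An illustrative version of this residual diagnostic (simulated residuals stand in for fitted values; all names are assumptions): if the model fits, r_i = −log S(t_i; …) behaves like a unit-exponential sample, so the log of the proportion of residuals exceeding r is linear in r with slope −1.

```python
import numpy as np

# Stand-in residuals; in practice r_i = -log S(t_i; ...) comes from the
# fitted survival function of the model being checked.
rng = np.random.default_rng(1)
r = np.sort(rng.exponential(size=500))

# Log empirical proportion of residuals exceeding each sorted r value
# (ranges over 1, (n-1)/n, ..., 1/n, so no log(0) occurs).
log_surv = np.log(1.0 - np.arange(len(r)) / len(r))

slope = np.polyfit(r, log_surv, 1)[0]
print(f"fitted slope: {slope:.2f}  (close to -1 indicates a good fit)")
```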