1978
DOI: 10.2307/2335925
The Efficiency of a Linear Discriminant Function Based on Unclassified Initial Samples

Abstract: Estimation of the optimal linear discriminant function is considered…
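Since the abstract is truncated here, the following is only an orienting sketch, not the paper's own method: for two normal populations with a common covariance matrix the optimal (Bayes) discriminant function is linear in the observation, and from a classified training sample it is usually estimated by plugging in the sample means and a pooled covariance matrix. The paper asks how efficient such a rule is when the training observations are instead unclassified, i.e. drawn from the mixture. All function and variable names below are illustrative.

```python
import numpy as np

# Illustrative sketch only: the plug-in linear discriminant for two normal
# populations with a common covariance matrix, estimated from CLASSIFIED samples.
# The paper studies the efficiency lost when the samples are unclassified.

def plugin_linear_discriminant(x1, x2, prior1=0.5):
    """Return (a, b) for the rule: assign x to population 1 if a @ x + b > 0."""
    mu1, mu2 = x1.mean(axis=0), x2.mean(axis=0)
    n1, n2 = len(x1), len(x2)
    # Pooled covariance estimate (common-covariance assumption).
    s = ((x1 - mu1).T @ (x1 - mu1) + (x2 - mu2).T @ (x2 - mu2)) / (n1 + n2 - 2)
    s_inv = np.linalg.inv(s)
    a = s_inv @ (mu1 - mu2)
    b = -0.5 * (mu1 + mu2) @ a + np.log(prior1 / (1.0 - prior1))
    return a, b

rng = np.random.default_rng(0)
x1 = rng.normal(loc=[1.0, 0.0], scale=1.0, size=(50, 2))
x2 = rng.normal(loc=[-1.0, 0.0], scale=1.0, size=(50, 2))
a, b = plugin_linear_discriminant(x1, x2)
print("coefficients:", a, "intercept:", b)
```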

Citations: cited by 15 publications (21 citation statements)
References: 0 publications
“…This measure of asymptotic relative efficiency was used by O'Neill (1978) and Ganesalingam and McLachlan (1978).…”
Section: Relative Efficiency Results (mentioning)
confidence: 99%
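As a gloss on the quoted statement (my paraphrase, not part of the citing paper): the asymptotic relative efficiency in O'Neill (1978) and Ganesalingam and McLachlan (1978) compares the rule estimated from classified samples with the rule estimated from unclassified samples through their expected excess error rates over the optimal Bayes rate, roughly

\[
\mathrm{eff} \;=\; \lim_{n \to \infty}
\frac{E\{R_C(n)\} - R_{\mathrm{opt}}}{E\{R_U(n)\} - R_{\mathrm{opt}}},
\]

where \(R_C(n)\) and \(R_U(n)\) are the error rates of the rules formed from classified and unclassified samples of size \(n\), and \(R_{\mathrm{opt}}\) is the Bayes error rate.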
“…In all these studies the underlying populations are assumed to be normal. Ganesalingam and McLachlan (1979) found from simulation experiments that the mixture discriminant function performed satisfactorily although the maximum likelihood estimates of the parameters estimated from the mixed samples were poor; O'Neill (1978) and Ganesalingam and McLachlan (1978) present only asymptotic results.…”
Section: Introduction (mentioning)
confidence: 99%
“…Note that Eq. (12) can be used to derive the optimal boundaries for any priors. Table 1 shows the Bayes error rate, the optimal boundaries b* and the error rates for the three distributions.…”
Section: Nl For Two Gaussians (mentioning)
confidence: 99%
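Eq. (12) and Table 1 belong to the citing paper and are not reproduced here. For orientation only, a minimal sketch of the generic calculation the passage alludes to: the optimal (Bayes) boundary and error rate for two univariate Gaussians with a common variance and arbitrary priors. All names and numbers below are illustrative.

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bayes_boundary_and_error(mu1, mu2, sigma, p1):
    """Assume mu1 < mu2 and common variance sigma**2; p1 is the prior of class 1."""
    p2 = 1.0 - p1
    # Boundary b where p1*f1(b) = p2*f2(b); classify to class 1 when x < b.
    b = 0.5 * (mu1 + mu2) + sigma**2 * math.log(p1 / p2) / (mu2 - mu1)
    # Bayes error: prior-weighted misclassification probabilities of the two classes.
    err = p1 * (1.0 - normal_cdf((b - mu1) / sigma)) + p2 * normal_cdf((b - mu2) / sigma)
    return b, err

b, err = bayes_boundary_and_error(mu1=0.0, mu2=2.0, sigma=1.0, p1=0.3)
print(f"optimal boundary b* = {b:.3f}, Bayes error = {err:.4f}")
```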
“…Following Rao's (1948) article, likelihood estimation appears not to have been pursued further until Hasselblad (1966, 1969) addressed the problem, initially for a mixture of g univariate normal distributions with equal variances. The likelihood approach to the fitting of mixture models, in particular normal mixtures, has since been utilized by several authors, including Hosmer (1973a, 1973b, 1974, 1978), O'Neill (1978), and Ganesalingam and McLachlan (1978, 1979, 1980). Butler (1986) noted that Jeffreys (1932) used essentially the Expectation-Maximization (EM) algorithm in iteratively computing the estimates of the means of two univariate normal populations, which had known variances and which were mixed in known proportions.…”
Section: Introduction (mentioning)
confidence: 99%
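As a generic illustration of the fitting procedure the quoted passage describes (not code from any of the cited papers), here is a minimal EM sketch for a two-component univariate normal mixture with a common variance, the equal-variance setting Hasselblad considered. Starting values and names are illustrative.

```python
import numpy as np

def em_two_normals(x, n_iter=200, tol=1e-8):
    x = np.asarray(x, dtype=float)
    # Crude initial values from the quartiles of the pooled data.
    pi1 = 0.5
    mu1, mu2 = np.quantile(x, 0.25), np.quantile(x, 0.75)
    sigma2 = x.var()
    for _ in range(n_iter):
        # E-step: posterior probability that each observation came from component 1.
        d1 = pi1 * np.exp(-0.5 * (x - mu1) ** 2 / sigma2)
        d2 = (1 - pi1) * np.exp(-0.5 * (x - mu2) ** 2 / sigma2)
        tau = d1 / (d1 + d2)
        # M-step: update mixing proportion, means, and common variance.
        pi1_new = tau.mean()
        mu1_new = (tau * x).sum() / tau.sum()
        mu2_new = ((1 - tau) * x).sum() / (1 - tau).sum()
        sigma2_new = (tau * (x - mu1_new) ** 2 + (1 - tau) * (x - mu2_new) ** 2).mean()
        converged = abs(mu1_new - mu1) + abs(mu2_new - mu2) < tol
        pi1, mu1, mu2, sigma2 = pi1_new, mu1_new, mu2_new, sigma2_new
        if converged:
            break
    return pi1, mu1, mu2, sigma2

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(3.0, 1.0, 200)])
print(em_two_normals(x))
```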