The efficiency of a linear discriminant function based on unclassified initial samples
Ganesalingam, S. and McLachlan, G. J. (1978). Biometrika 65(3), 658.
DOI: 10.1093/biomet/65.3.658

Cited by 68 publications (17 citation statements). References 5 publications.
“…Cooper and Freeman [22] were optimistic enough about unlabeled data to title their work "On the asymptotic improvement in the outcome of supervised learning provided by additional nonsupervised learning". Other early studies, such as [23-25], further strengthened the assertion that unlabeled data should be used whenever available. Castelli [26] and Ratsaby and Venkatesh [27] showed that unlabeled data are always asymptotically useful for classification.…”
Section: Learning a Classifier from Labeled and Unlabeled Training (mentioning)
Confidence: 95%
“…Castelli [26] and Ratsaby and Venkatesh [27] showed that unlabeled data are always asymptotically useful for classification. Krishnan and Nandy [19, 20] extended the results of [25] to provide efficiency results for discriminant and logistic-normal models for samples that are labeled stochastically. It should be noted that such previous theoretical work makes the critical assumption that p(C, X)…”
Section: Learning a Classifier from Labeled and Unlabeled Training (mentioning)
Confidence: 99%
“…This is the approach recommended in the statistical literature when classifying a population for which no training sample is available to determine the discriminant function [Bryant and Williamson, 1978; Ganesalingam and McLachlan, 1978]. As noted above, the discriminant analysis is related to maximum likelihood estimation of the relationships.…”
Section: Discussion (mentioning)
Confidence: 99%
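The excerpt above describes the approach of the cited 1978 paper: when no classified training sample exists, fit a two-component normal mixture to the unclassified data by maximum likelihood and plug the estimated parameters into the usual linear discriminant function. A minimal sketch of that idea, using scikit-learn's GaussianMixture on synthetic data (the group means, shared covariance, and sample sizes below are illustrative assumptions, not values from the paper):

import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic unclassified sample drawn from two overlapping normal groups;
# in practice X would be the observed, label-free measurements.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.multivariate_normal([0.0, 0.0], np.eye(2), size=150),
    rng.multivariate_normal([2.0, 2.0], np.eye(2), size=150),
])

# Maximum likelihood fit of a two-component normal mixture with a common
# ('tied') covariance matrix, the homoscedastic model underlying the
# linear discriminant function.
gm = GaussianMixture(n_components=2, covariance_type="tied",
                     random_state=0).fit(X)
mu0, mu1 = gm.means_          # estimated group means
sigma = gm.covariances_       # pooled covariance ('tied' gives one matrix)
pi0, pi1 = gm.weights_        # estimated mixing proportions

# Plug-in linear discriminant: allocate x to group 1 when w @ x + b > 0.
w = np.linalg.solve(sigma, mu1 - mu0)
b = -0.5 * w @ (mu0 + mu1) + np.log(pi1 / pi0)
labels = (X @ w + b > 0).astype(int)

The discriminant coefficients here come entirely from the mixture fit, so no labeled training sample is ever used; the efficiency of this plug-in rule relative to one built from classified samples is exactly what the 1978 paper quantifies.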
“…To reveal latent population structure within the craniometric dataset and to produce the individual posterior probabilities, or coefficients of membership, needed to estimate admixture proportions and infer geographic ancestry in the absence of population identifiers and reference samples, this study exploits the unsupervised model-based clustering methods (MBC) of finite mixture analysis (FMA). Finite mixture models are powerful tools for probabilistic data analysis as they provide a principled yet flexible framework for the robust clustering of various distributions at all levels of supervision (Ganesalingam and McLachlan, 1978; Banfield and Raftery, 1993; Schroeter et al., 1998; McLachlan and Peel, 2000; Peel and McLachlan, 2000). Here, it is assumed that the data are composed of a mixture of a finite number of underlying Gaussian probability distributions, with each component in the model corresponding directly to some number of unobserved clusters or populations, k, which may be mutually exclusive or exhibit varying degrees of overlap (Fraley and Raftery, 2002).…”
Section: Unsupervised Model-Based Clustering (mentioning)
Confidence: 99%
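The excerpt above fits a finite Gaussian mixture to unlabeled data and reads off each individual's posterior probabilities of component membership. A hedged sketch of that workflow, again with scikit-learn's GaussianMixture on synthetic stand-in data (the feature dimension, sample sizes, and candidate range of k are assumptions for illustration, and BIC is one common way to choose k, not necessarily the cited study's criterion):

import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in for an unlabeled feature matrix (e.g., measurements per
# individual); real data would replace this synthetic three-group mixture.
rng = np.random.default_rng(1)
X = np.vstack([
    rng.normal(0.0, 1.0, size=(100, 4)),
    rng.normal(1.5, 1.0, size=(100, 4)),
    rng.normal(3.0, 1.0, size=(100, 4)),
])

# Fit mixtures over a range of candidate k and select the number of
# components by BIC, a standard criterion for finite mixture models.
fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
        for k in range(1, 7)}
best_k = min(fits, key=lambda k: fits[k].bic(X))

# Posterior membership probabilities ('coefficients of membership'):
# one row per individual, one column per inferred cluster, rows sum to 1.
posteriors = fits[best_k].predict_proba(X)
hard_labels = posteriors.argmax(axis=1)

Because the components may overlap, the soft posteriors are more informative than the hard labels: an individual lying between two clusters receives intermediate membership probabilities, which is what makes admixture-style interpretations possible.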