Classification and Clustering 1977
DOI: 10.1016/b978-0-12-714250-0.50007-3
Distribution Problems in Clustering

Cited by 86 publications (53 citation statements)
References 21 publications
“…If the number of states in Markov-switching models is known, the EM algorithm yields consistent parameter estimates, and statistical inference proceeds via standard maximum-likelihood theory (e.g., Bickel, Ritov and Rydén 1998). If the number of states is not known, however, the likelihood ratio test to infer the true number of states breaks down because regularity conditions do not hold (see Hartigan 1977, Hansen 1992, Garcia 1998). …”
Section: Introduction
confidence: 99%
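The excerpt above separates two regimes: with the number of states known, EM yields consistent estimates; with it unknown, the likelihood ratio test misbehaves because regularity conditions fail. The first point rests on EM's monotonicity — each iteration cannot decrease the log-likelihood. A minimal pure-Python sketch for a two-component normal mixture (a stand-in for the state-conditional densities; the data, initialization, and function names are illustrative assumptions, not taken from the cited papers):

```python
import math
import random

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def mixture_loglik(data, params):
    """Log-likelihood of a two-component normal mixture."""
    w, mu1, s1, mu2, s2 = params
    return sum(math.log(w * normal_pdf(x, mu1, s1)
                        + (1 - w) * normal_pdf(x, mu2, s2)) for x in data)

def em_step(data, params):
    """One EM update for a two-component normal mixture."""
    w, mu1, s1, mu2, s2 = params
    # E-step: posterior responsibility of component 1 for each point
    r = []
    for x in data:
        p1 = w * normal_pdf(x, mu1, s1)
        p2 = (1 - w) * normal_pdf(x, mu2, s2)
        r.append(p1 / (p1 + p2))
    # M-step: responsibility-weighted maximum-likelihood updates
    n1 = sum(r)
    n2 = len(data) - n1
    mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
    mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
    s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1) or 1e-6
    s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2) or 1e-6
    return (n1 / len(data), mu1, s1, mu2, s2)

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)]  # truth: one normal
params = (0.5, -1.0, 1.0, 1.0, 1.0)                  # deliberately split start
lls = [mixture_loglik(data, params)]
for _ in range(25):
    params = em_step(data, params)
    lls.append(mixture_loglik(data, params))
# Monotonicity: the log-likelihood trace never decreases
assert all(b >= a - 1e-9 for a, b in zip(lls, lls[1:]))
```

Note that the two-component model still fits the single-normal data at least as well as a one-component model, which is exactly why a naive likelihood-ratio comparison of "k states vs. k+1 states" is tempting — and why, as the excerpt notes, its null distribution is non-standard.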
“…Given the low number of data available for the analysis, it was considered that the loss of data through the use of firm slices was unacceptable and overlapping ones were instead adopted. In addition, trials indicated that using firm time slices would … Overlapping time slices were created by using Moran's Local I test (Anselin, 1995). … After data were allocated to the time slices, a partitioning clustering technique, K-means, was implemented (Hartigan, 1975, 1977). K-means clustering is a statistical method for grouping … is faster and produces more discrete clusters.…”
confidence: 99%
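The partitioning technique cited in the excerpt (Hartigan, 1975) can be sketched in its standard batch (Lloyd-style) form: alternate between assigning each point to its nearest center and moving each center to the mean of its cluster. The 1-D data, seed, and function names below are illustrative assumptions, not details of the cited study:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Batch K-means for 1-D data: alternate assignment and mean-update steps."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for x in points:
            j = min(range(k), key=lambda j: (x - centers[j]) ** 2)
            clusters[j].append(x)
        # Update step: each center moves to its cluster's mean
        # (an empty cluster keeps its previous center)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

random.seed(1)
points = ([random.gauss(0.0, 0.3) for _ in range(50)]
          + [random.gauss(5.0, 0.3) for _ in range(50)])
centers, clusters = kmeans(points, k=2)
```

On well-separated data like this, the two centers converge near the true group means (about 0 and 5); with overlapping groups, results depend on initialization, which is one reason partitioning methods are usually run from several starts.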
“…That is why the class of normal distributions is most often chosen as an approximate model for a general population of random data; in this case it is enough to estimate only two parameters, μ and σ, from the sample. Mixtures of distributions built on this basis, which possess infinite differentiability, smoothness, and so forth, are widely used in various simulations [15–19].…”
Section: Introduction
confidence: 99%
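The two-parameter point in the excerpt — a normal model is fixed once μ and σ are estimated — and the mixture construction can be sketched as follows. The sample size, seed, and function names are assumptions for illustration only:

```python
import math
import random

random.seed(42)
sample = [random.gauss(2.0, 0.5) for _ in range(10_000)]

# Maximum-likelihood estimates of the two normal parameters
mu_hat = sum(sample) / len(sample)
sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in sample) / len(sample))

def mixture_density(x, comps):
    """Density of a normal mixture; comps is a list of (weight, mu, sigma)
    triples whose weights sum to 1."""
    return sum(w * math.exp(-0.5 * ((x - m) / s) ** 2)
               / (s * math.sqrt(2 * math.pi))
               for w, m, s in comps)

# A two-component mixture built from fitted normals remains smooth and
# infinitely differentiable, which is the property the excerpt highlights.
density_at_mean = mixture_density(mu_hat, [(0.5, mu_hat, sigma_hat),
                                           (0.5, mu_hat + 1.0, sigma_hat)])
```

With 10,000 draws the estimates land close to the true values (μ = 2.0, σ = 0.5), which is the sense in which fitting a normal model reduces to estimating just these two numbers.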