1970
DOI: 10.1109/tit.1970.1054415
Bounds on performance of optimum quantizers

Cited by 25 publications (7 citation statements)
References 22 publications (46 reference statements)
“…Univariate sample K-means is encountered when the goal is to divide an N×1 vector, x, into K groups. Elias (1970) showed that the population partition is optimal when the within-cluster sums of squares are equal. Based on work in Wong (1982a) on population K-means clusters, Wong (1984) derived two double asymptotic theorems (K→∞ and N→∞, indicating that the length of the cluster intervals approaches zero while the size of each cluster approaches infinity) for univariate sample K-means clusters: (a) the sample cluster within-sums of squares are equal; and (b) the size of the cluster intervals is inversely proportional to the cube root of the underlying density at the midpoints (which are sufficiently close to the mean in large samples) of the intervals.…”
Section: Theoretical Results Concerning K-means (mentioning)
confidence: 99%
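In high-resolution quantization notation, the two asymptotic conditions quoted above can be stated compactly. The following is a sketch only; the symbols h_k, m_k, C_k, and f are illustrative and are not taken from the cited texts.

\[
h_k \;\propto\; f(m_k)^{-1/3}
\qquad\text{and}\qquad
\sum_{x_i \in C_k}\bigl(x_i - \bar{x}_k\bigr)^2 \;\approx\; \text{const over } k,
\]

where h_k and m_k denote the length and midpoint of the k-th cluster interval C_k, \bar{x}_k its sample mean, and f the underlying density; the first relation is the cube-root rule and the second restates the equal within-cluster sum-of-squares condition.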
“…Other realizations of the generalized scalar quantizer, such as quantization with memory (by means of feedback) and quantization with memory and "preview", will be analyzed in Sections 6 and 7, respectively. For a more comprehensive analysis of quantization see, e.g., [45,16,17].…”
Section: Quantization (mentioning)
confidence: 99%
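The feedback realization mentioned above can be illustrated with a short numerical sketch. This is a minimal DPCM-style loop under assumed parameters (first-order predictor with coefficient a, uniform quantizer with a fixed step, AR(1) test source); the function names and values are illustrative and not taken from the cited papers.

import numpy as np

def quantize(v, step):
    """Uniform mid-tread scalar quantizer with step size `step`."""
    return step * np.round(v / step)

def feedback_quantizer(x, step=0.1, a=0.95):
    """Quantize the prediction error e[n] = x[n] - a*xr[n-1] and feed the
    reconstruction xr[n] = a*xr[n-1] + Q(e[n]) back into the predictor."""
    xr = np.zeros_like(x)    # reconstructed signal
    err = np.zeros_like(x)   # unquantized prediction errors, kept for inspection
    prev = 0.0
    for n, s in enumerate(x):
        pred = a * prev                        # prediction from reconstructed past
        err[n] = s - pred
        xr[n] = pred + quantize(err[n], step)  # feedback: reconstruction drives predictor
        prev = xr[n]
    return xr, err

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.zeros(10_000)
    for n in range(1, len(x)):                 # correlated AR(1) test source
        x[n] = 0.95 * x[n - 1] + rng.normal(scale=0.1)
    xr, err = feedback_quantizer(x)
    print("source variance          :", np.var(x))
    print("prediction-error variance:", np.var(err))            # far smaller range to quantize
    print("reconstruction MSE       :", np.mean((x - xr) ** 2)) # roughly step**2 / 12

The value of the memory is visible in the printout: the quantizer only has to cover the prediction-error range, which is an order of magnitude smaller than the source range, so the same distortion is reached with fewer levels (fewer bits) than a memoryless quantizer would need.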
“…First, the R-D relationship is mainly derived based on the distribution and quantizer that are convenient to calculate [7]. Second, the R-D performance is mostly observed in ideal conditions like extremely high or low bitrate [8], [9]. In contrast, the R-D analysis over the real coding environment, i.e.…”
Section: Introduction (mentioning)
confidence: 99%
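For context, the "extremely high bitrate" regime referred to above is usually treated with the classical high-resolution approximation. As a sketch (a standard result for fixed-rate scalar quantization of a source with density f, not a formula taken from the cited papers):

\[
D(R) \;\approx\; \frac{1}{12}\left(\int f(x)^{1/3}\,dx\right)^{3} 2^{-2R},
\]

i.e. the mean-squared distortion of an optimum R-bit scalar quantizer decays as 2^{-2R}, which is exactly the kind of idealized R-D relationship that is convenient to calculate but need not match a real coding environment.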