1998
DOI: 10.1016/s0893-9659(98)00064-0

Estimators based on sample quantiles using (h, φ)-entropy measures

Abstract: A point estimation procedure based on the maximum entropy principle for (h, φ)-entropies is proposed using sample quantiles. These estimators are efficient and asymptotically normal under standard regularity conditions. A goodness-of-fit test is constructed whose statistic is asymptotically chi-squared distributed. These results generalize those obtained in [1].
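For context, the (h, φ)-entropy family referenced in the title is usually written (in the Salicrú-style formulation; this is a sketch of the standard definition, not reproduced from the paper itself) as

H_{(h,\varphi)}(P) = h\!\left( \sum_{i=1}^{k} \varphi(p_i) \right),

where φ is applied to each cell probability and h is a monotone scaling function; choosing h(x) = x and φ(x) = -x \log x recovers the Shannon entropy.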

Cited by 6 publications (3 citation statements, citing publications from 2005 to 2022). References 8 publications.

“…These authors based their proposal on the well-known Shannon entropy. There has been extensive work on generalized entropies in the past [30][31][32][36][37][38][39]. We focus on the Havrda-Charvat entropy, which reduces to the Shannon case if the parameter is set to one, to extend that surrogacy measure.…”
Section: Discussion (citation type: mentioning)
confidence: 99%
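As background for the statement above (a standard form, not taken from the citing paper), the Havrda-Charvát entropy of order α is commonly defined as

H_\alpha(P) = \frac{1}{2^{1-\alpha} - 1} \left( \sum_{i} p_i^{\alpha} - 1 \right), \qquad \alpha \neq 1,

which tends to the Shannon entropy -\sum_i p_i \log_2 p_i as α → 1; in the (h, φ) framework it corresponds to φ(x) = x^α with the affine map h(x) = (x - 1)/(2^{1-\alpha} - 1).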
“…The optimal choice of α remains a research question. Additional research can consider other ITMA, such as divergence measures [36], taking into account that the mutual information is equal to the Kullback divergence, or measures of unilateral dependency such as that defined by Andonie et al. [37] based on the informational energy [39], or surrogacy for testing of variances [38].…”
Section: Discussion (citation type: mentioning)
confidence: 99%
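For reference, the identity alluded to in the statement above is standard: the mutual information is the Kullback-Leibler divergence between the joint distribution and the product of its marginals,

I(X;Y) = D_{\mathrm{KL}}\big( P_{XY} \,\|\, P_X \otimes P_Y \big) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\, p(y)}.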
“…Numerical examples have indicated that the MEE criterion could achieve a better error distribution, especially for higher-order statistics in the adaptive system training [3]. In the literature of statistical information theory, there are many other important entropy measures different from Shannon's and Renyi's definitions [12][13][14]. Similar to the MMSE criterion, Shannon's entropy is also not always the optimum entropy criterion.…”
Citation type: mentioning
confidence: 99%
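For background (a standard definition, not drawn from the citing work), the Rényi entropy of order α contrasted with Shannon's in the statement above is

H^{R}_\alpha(P) = \frac{1}{1-\alpha} \log \sum_{i} p_i^{\alpha}, \qquad \alpha > 0, \ \alpha \neq 1,

which also belongs to the (h, φ) family of the indexed paper, with φ(x) = x^α and h(x) = (1-\alpha)^{-1} \log x, and converges to the Shannon entropy as α → 1.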