2009
DOI: 10.1016/j.jksus.2009.07.003

Discriminating between gamma and lognormal distributions with applications

Abstract: In this paper, we discuss the use of the coefficient of skewness as a goodness-of-fit test to distinguish between the gamma and lognormal distributions. We also show the limitations of this idea. Next, we use the moments of order statistics from the gamma distribution to adjust the correlation goodness-of-fit test. In addition, we calculate the power of the test based on some other alternative distributions, including the lognormal distribution. Further, we show some numerical illustrations. Finally, we apply the pro…
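To make the skewness idea concrete, the following is a minimal sketch (an illustration under stated assumptions, not the authors' exact procedure): fit both models by maximum likelihood, compute the skewness each fitted model implies, and prefer the model whose implied skewness is closer to the sample coefficient of skewness. The function name skewness_discriminator and the closeness rule are illustrative choices.

```python
# A rough sketch (not the authors' exact procedure): compare the sample
# coefficient of skewness with the skewness implied by ML-fitted gamma and
# lognormal models, and prefer the model whose implied skewness is closer.
import numpy as np
from scipy import stats

def skewness_discriminator(x):
    """Return 'gamma' or 'lognormal' for a positive, right-skewed sample x."""
    x = np.asarray(x, dtype=float)
    b1 = stats.skew(x, bias=False)              # sample coefficient of skewness

    # ML fits with the location fixed at 0, since both models live on (0, inf)
    k, _, _ = stats.gamma.fit(x, floc=0)        # gamma shape k
    sigma, _, _ = stats.lognorm.fit(x, floc=0)  # lognormal sigma

    skew_gamma = 2.0 / np.sqrt(k)               # skewness of gamma(k)
    w = np.exp(sigma ** 2)
    skew_lognorm = (w + 2.0) * np.sqrt(w - 1.0) # skewness of lognormal(sigma)

    return "gamma" if abs(b1 - skew_gamma) <= abs(b1 - skew_lognorm) else "lognormal"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(skewness_discriminator(rng.gamma(shape=2.0, scale=3.0, size=200)))     # typically 'gamma'
    print(skewness_discriminator(rng.lognormal(mean=1.0, sigma=0.8, size=200)))  # typically 'lognormal'
```

Running the two simulated examples gives a quick sense of how well such a rule separates the two families at a given sample size, which is the power question the abstract raises.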

Cited by 6 publications (3 citation statements), published 2017–2024
References 31 publications

Citation statements (ordered by relevance):
“…The data given arose in tests on endurance of deep groove ball bearings. The data are the number of million revolutions before failure for each of the lifetime tests, and they are: 17 … Here we consider the gamma model as the component of the vector of densities F_θ and the log-normal as the component of F_γ, defined respectively in Section 7. Therefore, to analyze a skewed positive data set an experimenter might wish to select one of them.…”
Section: Example With Real Data (mentioning)
confidence: 99%
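One common way to carry out the selection the citing authors describe, choosing between a gamma and a log-normal model for a skewed positive lifetime sample, is to compare the maximized log-likelihoods of the two fitted candidates. The sketch below assumes that approach; because the ball-bearing values are truncated in the snippet above, simulated lifetimes stand in for the real data, and the function name is illustrative.

```python
# A minimal sketch, under the assumption that model choice is made by comparing
# maximized log-likelihoods of the two fitted candidates; the ball-bearing values
# are truncated in the snippet above, so simulated lifetimes stand in.
import numpy as np
from scipy import stats

def select_gamma_or_lognormal(x):
    """Return the preferred model and the log-likelihood difference."""
    x = np.asarray(x, dtype=float)
    k, _, theta = stats.gamma.fit(x, floc=0)
    ll_gamma = stats.gamma.logpdf(x, k, loc=0, scale=theta).sum()
    s, _, scale = stats.lognorm.fit(x, floc=0)
    ll_lognorm = stats.lognorm.logpdf(x, s, loc=0, scale=scale).sum()
    choice = "gamma" if ll_gamma >= ll_lognorm else "lognormal"
    return choice, ll_gamma - ll_lognorm

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    lifetimes = rng.gamma(shape=4.0, scale=18.0, size=23)  # simulated lifetimes
    print(select_gamma_or_lognormal(lifetimes))
```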
“…There are a lot of works in this area. Some of them are Alzaid & Sultan [1], Kundu & Manglick [2], Bromideh and Valizadeh [3], Dey and Kundu [4], Dey and Kundu [5], Kundu [6], Kantam et al. [7], Ngom et al. [8], Ravikumar and Kantam [9], Qaffou and Zoglat [10] and Algamal [11].…”
Section: Introduction (mentioning)
confidence: 99%
“…Considering a candidate model for some given data generated by an unknown probability distribution, the dissimilarity between those two probability distributions can be measured by the Kullback-Leibler divergence (KLD) introduced by Kullback and Leibler [9]. Since the true density is unknown, various criteria and hypothesis tests have been used for model selection purposes [10,11,12,13,14,15,16,17,18]. In this paper, we shall derive a strongly consistent estimator of the KLD between two distributions based on the bias-reduced kernel density estimator.…”
Section: Introduction (mentioning)
confidence: 99%
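As a rough illustration of the plug-in idea behind such KLD-based comparisons (using scipy's standard Gaussian kernel density estimator rather than the bias-reduced estimator the citing paper develops), KL(f || g) can be estimated from samples of the two distributions by averaging the log-ratio of the kernel density estimates over the f-sample. The function name kld_via_kde and the small eps floor are assumptions for illustration.

```python
# A rough illustration (with scipy's standard Gaussian KDE, not the bias-reduced
# kernel density estimator the citing paper develops): plug-in estimate of
# KL(f || g) obtained by averaging the log-ratio of two kernel density
# estimates over the sample drawn from f.
import numpy as np
from scipy.stats import gaussian_kde

def kld_via_kde(sample_f, sample_g, eps=1e-12):
    """Plug-in estimate of KL(f || g) from two univariate samples."""
    f_hat = gaussian_kde(sample_f)
    g_hat = gaussian_kde(sample_g)
    x = np.asarray(sample_f, dtype=float)
    # eps guards against log(0) when a point falls far outside sample_g's support
    return float(np.mean(np.log(f_hat(x) + eps) - np.log(g_hat(x) + eps)))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    f_sample = rng.gamma(shape=2.0, scale=1.5, size=500)
    g_sample = rng.lognormal(mean=0.5, sigma=0.7, size=500)
    print(f"estimated KL(f || g): {kld_via_kde(f_sample, g_sample):.3f}")
```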