2015
DOI: 10.15672/hjms.20158812908

Two Different Shrinkage Estimator Classes for the Shape Parameter of Classical Pareto Distribution

Abstract: In this study, biased estimators for the shape parameter of the classical Pareto distribution are proposed using two different shrinkage techniques that yield a smaller mean square error than the unbiased estimator. The resulting biased estimators are then compared with the unbiased estimator in terms of their mean square error.
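The kind of comparison described in the abstract can be illustrated with a small Monte Carlo sketch. The code below is only an illustration under assumed choices (a simple Thompson-type shrinkage toward a prior guess alpha0 with weight k); it does not reproduce the paper's two estimator classes, and the function name and parameter values are hypothetical.

    # Monte Carlo sketch: unbiased vs. a simple shrinkage estimator of the
    # Pareto shape parameter alpha (scale xm assumed known).
    # alpha0 and k are illustrative choices, not the paper's estimator classes.
    import numpy as np

    def mse_comparison(alpha=2.0, xm=1.0, n=20, alpha0=1.5, k=0.7,
                       reps=100_000, seed=0):
        rng = np.random.default_rng(seed)
        unb = np.empty(reps)
        shr = np.empty(reps)
        for r in range(reps):
            x = (rng.pareto(alpha, n) + 1.0) * xm     # classical Pareto(alpha, xm) sample
            t = np.log(x / xm).sum()                  # sufficient statistic
            unb[r] = (n - 1) / t                      # unbiased estimator of alpha
            shr[r] = k * unb[r] + (1 - k) * alpha0    # shrink toward the prior guess alpha0
        return ((unb - alpha) ** 2).mean(), ((shr - alpha) ** 2).mean()

    mse_unbiased, mse_shrunk = mse_comparison()
    print(mse_unbiased, mse_shrunk)   # the shrinkage estimator typically shows a smaller MSE
                                      # when alpha0 is not too far from the true alpha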

Cited by 4 publications (3 citation statements)
References 15 publications
“…In 1977, Richard F. Gunst and Robert L. Mason used the MSE (mean squared error) criterion to compare five estimators of regression coefficients (least squares, principal components, ridge regression, latent root, and shrunken estimator). In this context, each of the biased estimators showed an improvement in mean squared error over least squares for a wide range of choices of the model parameters [11][12][13][14]. Also, the results of a simulation encompassing all five estimators indicated that the principal components and latent root estimators perform best overall, while the ridge regression estimator can attain a smaller mean square error than either of these [11,15].…”
Section: Literature Review
confidence: 98%
“…The parameter 𝜆 in equation (21) is estimated through the cross-validation technique to find a suitable value for that parameter [9][14]. A suitable or appropriate value of the parameter 𝜆 is one that predicts the values of the response variable with the highest possible accuracy (lowest variance). To perform cross-validation, the data set is first divided into a number of subsets or folds (say Q) of approximately equal size.…”
Section: Least Absolute Shrinkage and Selection Operator (Lasso)
confidence: 99%
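A minimal sketch of the fold-based selection described in the citation above, using scikit-learn's Lasso; the grid of candidate 𝜆 values, Q = 5, and the function name choose_lambda are illustrative assumptions, not taken from the cited paper.

    # Q-fold cross-validation over a grid of candidate lambda values for the lasso.
    # scikit-learn's Lasso calls the penalty "alpha"; it plays the role of lambda here.
    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.model_selection import KFold

    def choose_lambda(X, y, lambdas=np.logspace(-3, 1, 30), Q=5):
        folds = KFold(n_splits=Q, shuffle=True, random_state=0)
        cv_error = []
        for lam in lambdas:
            fold_mse = []
            for train, test in folds.split(X):
                model = Lasso(alpha=lam).fit(X[train], y[train])
                resid = y[test] - model.predict(X[test])
                fold_mse.append(np.mean(resid ** 2))      # prediction error on the held-out fold
            cv_error.append(np.mean(fold_mse))            # average over the Q folds
        return lambdas[int(np.argmin(cv_error))]          # lambda with the smallest CV error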
“…Thompson (1968) suggested a shrinkage method that multiplies the best linear unbiased estimator (BLUE) by a shrinking factor to obtain an estimator with a smaller MSE than the BLUE. Shrinkage estimators have been considered in many studies in the literature: Metha and Srinivasan (1971) gave estimation of the mean by shrinkage to a point; Govindarajulu and Sahai (1972) studied estimation of the parameters of the normal distribution; Bhatnagar (1986) proposed using the variance in estimating the mean; Singh and Katyar (1988) proposed a generalized class of estimators for the parameters of the normal distribution; Singh (1990) also studied estimation of the parameters of normal distributions; Jani (1991) suggested a class of shrinkage estimators for the scale parameter of the exponential distribution; Singh and Singh (1997) and Singh and Saxena (2003) studied shrinkage estimation of the variance of a normal population; Singh and Saxena (2008) gave a family of shrinkage estimators for the Weibull shape parameter; Özdemir and Ebegil (2012) proposed shrinkage estimators for the shape parameter of the Pareto distribution; Mehta and Singh (2014) suggested shrinkage estimators of the parameters of the Morgenstern-type bivariate logistic distribution; Singh and Mehta (2016) studied a class of shrinkage estimators of the scale parameter of the uniform distribution based on k-record values; Ebegil and Özdemir (2016) proposed two different shrinkage estimator classes for the shape parameter of the classical Pareto distribution; Balui et al. (2020) gave two different shrinkage estimator classes for the scale parameter of the classical Rayleigh distribution; and Vishwakarma and Gupta (2022) proposed a shrinkage estimator for the scale parameter of the gamma distribution.…”
Section: Introduction
confidence: 99%
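As a brief note on the shrinking-factor idea attributed to Thompson (1968) in the citation above: if θ̂ is an unbiased estimator of θ with variance v, then MSE(cθ̂) = c²v + (c − 1)²θ², which is minimized at c = θ²/(θ² + v) < 1. Since this optimal factor depends on the unknown θ, a prior guess is substituted in practice, which is the basic mechanism behind the shrinkage estimators surveyed in that statement.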