A minimum divergence estimation method is developed for robust parameter estimation and model fitting. The proposed approach uses new density-based divergences which, unlike existing density-based minimum divergence methods (e.g. minimum Hellinger distance estimation), avoid the use of nonparametric density estimation and associated complications such as bandwidth selection. The proposed class of 'density power divergences' is indexed by a single parameter α, which can be varied to study the trade-off between robustness and efficiency. The method can be viewed as a robust extension of maximum likelihood estimation, since the class of divergences contains the Kullback-Leibler divergence when α = 0. Choices of α near zero afford robustness while retaining efficiency close to that of maximum likelihood.
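To make the construction concrete, here is a minimal sketch (illustrative code, not from the paper) of minimum density power divergence estimation for the normal model. For N(mu, sigma^2) the model-integral term of the divergence has a closed form, so the empirical objective, after dropping the term that does not involve the parameters, can be minimized numerically; the function names and the choice alpha = 0.25 are our own.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dpd_objective(params, x, alpha):
    """Empirical density power divergence objective (parameter-free term dropped):
    H_n(theta) = int f_theta^(1+alpha) - (1 + 1/alpha) * mean(f_theta(x_i)^alpha).
    For the N(mu, sigma^2) model the integral has the closed form
    (2*pi*sigma^2)^(-alpha/2) / sqrt(1 + alpha).  Requires alpha > 0."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)              # reparameterize to keep sigma > 0
    integral = (2 * np.pi * sigma**2) ** (-alpha / 2) / np.sqrt(1 + alpha)
    mean_term = np.mean(norm.pdf(x, mu, sigma) ** alpha)
    return integral - (1 + 1 / alpha) * mean_term

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 95), rng.normal(10, 1, 5)])  # 5% outliers

res = minimize(dpd_objective, x0=[np.median(x), 0.0], args=(x, 0.25))
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])  # should stay near (0, 1), where
                                                # the MLE would be dragged toward
                                                # the outliers
```

Letting alpha tend to zero recovers the negative log-likelihood, while larger alpha downweights observations sitting in regions of low model density, which is the source of the robustness.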
In real life we often have to deal with situations where the sampled observations are independent and share common parameters in their distributions but are not identically distributed. While maximum likelihood provides a canonical approach to statistical inference in such contexts, it carries the usual baggage of non-robustness to small deviations from the assumed conditions. In the present paper we develop a general estimation method for such situations based on a minimum distance approach that exploits the robustness properties of the density power divergence measure (Basu et al. 1998 [2]). We establish the asymptotic properties of the proposed estimators and illustrate the benefits of our method in the case of linear regression.
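For the normal linear model the same idea takes a simple form: each response y_i has its own density N(x_i'beta, sigma^2), and the average of the per-observation divergence objectives is minimized. The sketch below is our illustration of this setup, not code from the paper; alpha = 0.3 and the contamination scheme are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dpd_regression_objective(params, X, y, alpha):
    """Average DPD objective for y_i ~ N(x_i' beta, sigma^2): the observations
    share (beta, sigma) but are not identically distributed, so each y_i is
    matched against its own density; the parameter-free term is dropped."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    integral = (2 * np.pi * sigma**2) ** (-alpha / 2) / np.sqrt(1 + alpha)
    data_term = norm.pdf(y, X @ beta, sigma) ** alpha
    return integral - (1 + 1 / alpha) * np.mean(data_term)

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, n)
y[:10] += 15.0                                  # gross outliers in the response

beta0 = np.linalg.lstsq(X, y, rcond=None)[0]    # least-squares starting point
res = minimize(dpd_regression_objective, x0=[*beta0, 0.0], args=(X, y, 0.3))
beta_hat, sigma_hat = res.x[:-1], np.exp(res.x[-1])  # should stay near (1, 2), 0.5
```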
In this paper, the problem of speech enhancement when only the corrupted speech signal is available for processing is considered. For this, the Kalman filtering method is studied and compared with the Wiener filtering method; its performance is found to be significantly better. A delayed-Kalman filtering method is also proposed, which further improves the speech enhancement performance of the Kalman filter.
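As a rough illustration of the underlying mechanics (a sketch under assumed, fixed model parameters, not the paper's implementation): clean speech is commonly modeled as an AR(p) process observed in additive white noise, which puts the problem in state-space form so the standard Kalman recursion applies.

```python
import numpy as np

def kalman_enhance(y, a, q, r):
    """Kalman-filter speech enhancement sketch. Assumes the clean speech follows
    a known AR(p) model s_t = sum_k a[k] * s_{t-k} + w_t with Var(w_t) = q,
    observed as y_t = s_t + v_t with Var(v_t) = r. In practice a and q are
    re-estimated frame by frame and r from speech pauses; here they are fixed."""
    p = len(a)
    F = np.vstack([a, np.eye(p - 1, p)])    # companion state-transition matrix
    H = np.zeros(p); H[0] = 1.0             # we observe only the current sample
    Q = np.zeros((p, p)); Q[0, 0] = q
    x, P = np.zeros(p), np.eye(p)           # state estimate and its covariance
    s_hat = np.empty(len(y))
    for t, yt in enumerate(y):
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        k = P @ H / (H @ P @ H + r)         # Kalman gain
        x = x + k * (yt - H @ x)            # update with the noisy sample
        P = P - np.outer(k, H @ P)
        s_hat[t] = x[0]                     # filtered estimate of clean speech
    return s_hat

rng = np.random.default_rng(3)
a = np.array([1.2, -0.5])                   # illustrative stable AR(2) model
s = np.zeros(500)
for t in range(2, 500):
    s[t] = a[0] * s[t-1] + a[1] * s[t-2] + rng.normal(0, 0.1)
y = s + rng.normal(0, 0.3, 500)             # additive white observation noise
s_hat = kalman_enhance(y, a, q=0.01, r=0.09)
```

A delayed estimate falls out of the same recursion: since the state vector carries the last p samples, x[d] at time t is a smoothed estimate of s_{t-d} that has seen d additional observations, which is the kind of gain a delayed-Kalman scheme exploits.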
This paper compares the minimum divergence estimator of Basu, Harris, Hjort and Jones (1998) with a competing minimum divergence estimator which turns out to be equivalent to a method proposed from a different perspective by Windham (1995). Both methods can be applied to any parametric model, contain maximum likelihood as a special case, and can be extended to regression settings. Theoretical calculations are given to compare efficiencies under model conditions, and robustness properties are studied and compared. Overall the two methods are found to perform quite similarly. Some relatively small advantages of the former method over the latter are identified.
The generalized linear model is an important tool for analyzing real data in many application domains where the relationship between the response and the explanatory variables may not be linear, or the response distribution may not be normal. Such real data quite often contain a significant number of outliers relative to the standard parametric model used in the analysis; in such cases inference based on the maximum likelihood estimator can be unreliable. In this paper, we develop a robust estimation procedure for generalized linear models that yields robust estimators with little loss in efficiency. We explore two special cases in detail: Poisson regression for count data and logistic regression for binary data. We also illustrate the performance of the proposed estimators through some real-life examples.
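For the Poisson case, here is a hedged sketch of what such a procedure can look like (the function names, truncation point, and alpha value are our own, not from the paper): for a discrete model the integral term of the density power divergence becomes a sum over the support, which can be truncated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

def dpd_poisson_objective(beta, X, y, alpha, y_max=200):
    """Average DPD objective for Poisson regression with mean mu_i = exp(x_i' beta).
    The 'integral' term is a sum over the support, truncated at y_max (an
    illustrative cutoff that is safe when the fitted means are moderate)."""
    mu = np.exp(X @ beta)
    grid = np.arange(y_max + 1)
    pmf = poisson.pmf(grid[None, :], mu[:, None])      # shape (n, y_max + 1)
    sum_term = np.sum(pmf ** (1 + alpha), axis=1)
    data_term = poisson.pmf(y, mu) ** alpha
    return np.mean(sum_term - (1 + 1 / alpha) * data_term)

rng = np.random.default_rng(2)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.poisson(np.exp(X @ np.array([0.5, 1.0])))
y[:15] += 25                                           # contaminated counts

res = minimize(dpd_poisson_objective, x0=np.zeros(2), args=(X, y, 0.3))
# res.x should stay near (0.5, 1.0); the MLE would be inflated
# by the contaminated counts
```

The logistic case is analogous, with the sum running over y in {0, 1}.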