Applications of regression models for binary responses are very common, and models specific to these problems are widely used. Quantile regression for binary response data has recently attracted attention, and regularized quantile regression methods have been proposed for high-dimensional problems. When the predictors have a natural group structure, as when categorical predictors are converted into dummy variables, a group lasso penalty is used in regularized methods. In this paper, we present a Bayesian Gibbs sampling procedure to estimate the parameters of a binary quantile regression model under a group lasso penalty. Simulated and real data show that the proposed method performs well in comparison to mean-based approaches and to quantile-based approaches that do not exploit the group structure of the predictors.
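As a minimal illustration of the two ingredients named in the abstract, the sketch below defines the quantile check loss and the group lasso penalty in NumPy. This is not the authors' Gibbs sampler; the function names and the toy grouping are hypothetical.

```python
import numpy as np

def check_loss(u, tau):
    # Quantile (check) loss: rho_tau(u) = u * (tau - I(u < 0))
    return u * (tau - (u < 0))

def group_lasso_penalty(beta, groups, lam):
    # Sum of Euclidean norms of the coefficient sub-vectors, one per group;
    # this shrinks whole groups (e.g. all dummies of one factor) to zero together
    return lam * sum(np.linalg.norm(beta[g]) for g in groups)

# Toy example: two groups of dummy-coded coefficients
beta = np.array([0.5, -0.2, 0.0, 1.0])
groups = [[0, 1], [2, 3]]
pen = group_lasso_penalty(beta, groups, lam=0.1)
```

Because the penalty acts on the norm of each sub-vector rather than on individual coefficients, all dummy variables derived from the same categorical predictor enter or leave the model together.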
Networks appear in many fields, from finance to medicine, engineering, biology and social science. They often comprise a very large number of entities, the nodes, and the interest lies in inferring the interactions between these entities, the edges, from relatively limited data. If the underlying network of interactions is sparse, two main statistical approaches are used to retrieve such a structure: covariance modeling approaches with a penalty constraint that encourages sparsity of the network, and nodewise regression approaches with sparse regression methods applied at each node. In the presence of outliers or departures from normality, robust approaches have been developed which relax the assumption of normality. Robust covariance modeling approaches are reviewed and compared with novel nodewise approaches where robust methods are used at each node. For low-dimensional problems, classical deviance tests are also included and compared with penalised likelihood approaches. Overall, copula approaches are found to perform best: they are comparable to the other methods under an assumption of normality or mild departures from it, but superior when the assumption of normality is strongly violated.
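As a rough illustration of the (non-robust) nodewise regression idea mentioned above, the toy sketch below runs a plain coordinate-descent lasso at each node and declares an edge whenever either of the two regressions keeps a nonzero coefficient (an "OR" rule). It is a bare-bones illustration under roughly standardized data, not any of the robust methods compared in the paper, and all names are hypothetical.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    # Plain coordinate-descent lasso for (1/2n)||y - Xb||^2 + lam * ||b||_1
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed
            r = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r / n
            # Soft-threshold, then rescale by the column's mean square
            beta[j] = np.sign(z) * max(abs(z) - lam, 0.0) / (X[:, j] @ X[:, j] / n)
    return beta

def nodewise_edges(X, lam):
    # Regress each node on all the others; keep edge (i, j) if either
    # regression assigns it a nonzero coefficient ("OR" rule)
    p = X.shape[1]
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        beta = lasso_cd(X[:, others], X[:, j], lam)
        for b, k in zip(beta, others):
            if abs(b) > 1e-8:
                adj[j, k] = adj[k, j] = True
    return adj
```

A robust variant would replace `lasso_cd` at each node with a robust sparse regression; the covariance modeling alternative instead penalises entries of the precision matrix directly.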
In this paper, we review some variable selection methods for the linear regression model. Conventional methodologies such as ordinary least squares (OLS) are among the most commonly used for estimating the parameters of a linear regression. However, the OLS estimator performs poorly when the data contain outliers or when the assumption of normality is violated, as in the case of heavy-tailed errors. To address this problem, robust regularized regression methods such as the Huber Lasso (Rosset and Zhu, 2007) and quantile regression (Koenker and Bassett, 1978) have been proposed. This paper focuses on comparing the performance of seven methods: the quantile regression estimates, the Huber Lasso estimates, the adaptive Huber Lasso estimates, the adaptive LAD Lasso, the Gamma-divergence estimates, the Maximum Tangent likelihood Estimation (MTE) Lasso estimates, and the Semismooth Newton Coordinate Descent (SNCD) Huber loss estimates.
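To illustrate why Huber-type losses underlying several of the compared methods are robust to heavy-tailed errors, here is a minimal NumPy sketch contrasting the squared loss with the Huber loss. The function names are illustrative, not taken from the paper; the default tuning constant delta = 1.345 is the standard choice giving roughly 95% efficiency under normal errors.

```python
import numpy as np

def squared_loss(u):
    # OLS loss: grows quadratically, so one gross outlier can dominate the fit
    return 0.5 * u**2

def huber_loss(u, delta=1.345):
    # Quadratic for |u| <= delta, linear beyond it, so a single large
    # residual contributes far less than under the squared loss
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= delta,
                    0.5 * u**2,
                    delta * (np.abs(u) - 0.5 * delta))
```

For a residual of 10, the squared loss contributes 50 while the Huber loss contributes only about 12.5, which is the bounded-influence behaviour the robust Lasso variants exploit.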