2014
DOI: 10.1016/j.jspi.2014.02.001

Regularized multivariate regression models with skew-t error distributions

Abstract: We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t dist…
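The penalized objective described in the abstract can be sketched as follows. For brevity this uses a Gaussian working likelihood in place of the multivariate skew-t; the function name, arguments, and penalty weights are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def penalized_nll(Y, X, B, Omega, lam1, lam2):
    """L1-penalized negative log-likelihood for multivariate regression.

    Sketch under a Gaussian working likelihood (the paper uses a
    multivariate skew-t error distribution).
    Y: (n, q) responses, X: (n, p) predictors,
    B: (p, q) coefficient matrix, Omega: (q, q) inverse scale matrix.
    """
    n = Y.shape[0]
    R = Y - X @ B                                   # residual matrix
    _, logdet = np.linalg.slogdet(Omega)            # log|Omega|, stable
    nll = 0.5 * np.trace(R @ Omega @ R.T) - 0.5 * n * logdet
    # L1 penalties induce sparsity in both the coefficient and
    # inverse scale matrices simultaneously.
    pen = lam1 * np.abs(B).sum() + lam2 * np.abs(Omega).sum()
    return nll + pen
```

An iterative procedure would alternate minimizing this objective over B (a lasso-type step) and over Omega (a graphical-lasso-type step), holding the other matrix fixed.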

Year Published: 2015–2024

Cited by 19 publications (9 citation statements). References 33 publications.
“…First, other penalty functions for multivariate model estimations can be applied, such as the master predictor penalty, 36 L2 SVS, 37 and L∞ SVS 38 . Second, the assumption of a multivariate normal distribution on Y_i can be extended to other distributions, such as the multivariate skew-normal distribution 39 and the multivariate skew-t distribution, 40 to allow for the presence of skewness and kurtosis in the data. Third, missing values in the GDSC data were imputed by the random forest imputation algorithm.…”
Section: Discussion
confidence: 99%
“…Because the RR estimate is well-defined, including the case when the predictors are collinear, we use its -norm to scale the convergence criterion for our regression coefficient matrix . In addition, we use the sample covariance matrix of the RR residual to scale the convergence of the precision matrix (Chen et al., 2014). This implies that the convergence criteria for and are met when and , respectively.…”
Section: Methods
confidence: 99%
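The scaled convergence check quoted above can be sketched as follows. The excerpt does not recover which norm is used, so the Frobenius norm is assumed here for illustration; the function name and tolerance are hypothetical.

```python
import numpy as np

def converged(B_new, B_old, B_rr_norm, tol=1e-4):
    """Relative convergence check for the coefficient matrix iterates.

    The change between successive iterates is scaled by the norm of the
    ridge-regression (RR) estimate, which is well-defined even when the
    predictors are collinear. Frobenius norm assumed for illustration.
    """
    return np.linalg.norm(B_new - B_old) <= tol * B_rr_norm
```

An analogous check for the precision matrix would scale by the sample covariance of the RR residuals, per the quoted passage.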
“…This method could be used to explore associations between SNPs and neuroimaging measures by separately estimating individual neuroimaging measures based on a set of SNPs under study. In our experiments, we first conducted RMRR between p SNPs and q ROIs to select top SNPs using all the neuroimaging features (Chen et al., 2014). In particular, the top SNPs were selected based on the absolute values of the regression coefficient matrix, row by row.…”
Section: Methods
confidence: 99%
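The row-by-row selection described in the last excerpt can be sketched as below. Scoring each SNP (row) by its largest absolute coefficient is one plausible reading of the passage; the function name and the choice of row score are illustrative assumptions.

```python
import numpy as np

def top_snps(B, k):
    """Rank SNPs by the absolute values of the (p x q) coefficient
    matrix B, row by row, and return the indices of the top k rows.

    Each row of B holds one SNP's coefficients across the q ROIs;
    the row score here is the largest absolute coefficient in the row.
    """
    row_score = np.max(np.abs(B), axis=1)
    return np.argsort(row_score)[::-1][:k]   # indices, descending score
```

For example, with three SNPs and two ROIs, the SNP whose row contains the largest absolute coefficient is ranked first.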