2020
DOI: 10.3724/sp.j.1042.2020.01777
Lasso回归:从解释到预测 (Lasso Regression: From Explanation to Prediction)

Cited by 20 publications (13 citation statements)
References 75 publications
“…As a regularization penalty, the l1 norm has a strong capacity to sparsify the regression coefficient vector: it can compress the estimated coefficients of redundant predictors exactly to zero, so that coefficient shrinkage simultaneously performs variable screening. It therefore effectively avoids the poor generalization caused by overfitting and yields a simpler model with better predictive performance [48]. Compared with ridge regression, lasso regression performs stronger variable selection and acts as a dimension-reduction tool in high-dimensional space.…”
Section: Methods
confidence: 99%
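The shrinkage-to-zero behavior described in the statement above can be sketched with scikit-learn; the data below are synthetic and purely illustrative, not from the cited study:

```python
# Illustrative sketch: the l1 penalty of lasso drives coefficients of
# redundant predictors exactly to zero, while the l2 penalty of ridge
# only shrinks them toward zero.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
# Only the first two predictors carry signal; the other eight are redundant.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(100)

lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=0.1).fit(X, y)

# Lasso sets most redundant coefficients exactly to 0; ridge sets none.
print("lasso zero coefficients:", int(np.sum(lasso.coef_ == 0)))
print("ridge zero coefficients:", int(np.sum(ridge.coef_ == 0)))
```

This is the variable-screening effect the statement refers to: the lasso fit retains only the informative predictors, whereas the ridge fit keeps all ten with small but nonzero weights.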
“…Based on the ordinary least squares (OLS) loss function, lasso uses the sum of the absolute values of the regression coefficients as a penalty to compress the coefficients. When this sum is constrained to be small enough, some coefficients are compressed exactly to zero, and the variables with zero coefficients can then be eliminated, achieving variable selection [9]. Assuming the linear regression model X₀ = βX + ε, where X₀ is the system behavior characteristic vector, X is the influence-factor variable matrix, and β is the coefficient vector, the lasso coefficient estimate is as follows:…”
Section: PSO-GM(1, N) Model
confidence: 99%
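The estimator itself is truncated in the quoted statement. In its standard constrained form (written here with the statement's notation, and not necessarily in the exact form used by the cited text), the lasso estimate is:

```latex
\hat{\beta}_{\text{lasso}}
  = \arg\min_{\beta}\, \bigl\lVert X_0 - \beta X \bigr\rVert_2^{2}
  \quad \text{subject to} \quad \sum_{j} \lvert \beta_j \rvert \le t,
```

or equivalently, in penalized form, $\hat{\beta}_{\text{lasso}} = \arg\min_{\beta} \lVert X_0 - \beta X \rVert_2^2 + \lambda \sum_j \lvert \beta_j \rvert$, where the tuning parameter $t$ (or $\lambda$) controls how strongly the coefficients are compressed toward zero.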
“…The proposed method provides the most accurate and unbiased estimation by minimizing the residual sum of squares (30), but it also has shortcomings, such as overfitting and poor prediction on future observations (31). These problems become even more serious when a regression model contains many predictors (29,32,33).…”
Section: Introduction
confidence: 99%
“…Supervised learning is divided into classification and regression (34). The lasso regression (32,38) used in this study is a kind of regression that uses the least angle regression algorithm instead of least squares.…”
Section: Introduction
confidence: 99%
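The least-angle-regression route to the lasso mentioned above can be sketched with scikit-learn's `LassoLars`, which fits the lasso path via the LARS algorithm rather than coordinate descent; the setup below is illustrative, not the cited study's data:

```python
# Illustrative sketch: fitting lasso with the least angle regression (LARS)
# algorithm via scikit-learn's LassoLars estimator.
import numpy as np
from sklearn.linear_model import LassoLars

rng = np.random.default_rng(1)
X = rng.standard_normal((80, 5))
# Only the first predictor carries signal.
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(80)

model = LassoLars(alpha=0.05).fit(X, y)
# LARS enters predictors one at a time, so the fitted coefficient vector
# keeps the informative predictor and leaves the rest at (or near) zero.
print(model.coef_)
```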