2022
DOI: 10.3390/math10193695
Recent Advances on Penalized Regression Models for Biological Data

Abstract: Increasing amounts of biological data promote the development of various penalized regression models. This review discusses recent advances in both linear and logistic regression models with penalization terms, focusing on the penalized models themselves, some of the corresponding optimization algorithms, and their applications to biological data. The pros and cons of different models are compared in terms of response prediction, sample classification, network construction, and feature selection…

Cited by 5 publications (4 citation statements). References 134 publications (238 reference statements).
“…The same dataset was trained in elastic net regression analysis to confirm the predicted markers in PCA and informed decision-making against the dependent variable by score. The elastic net regression model demonstrated robust performance in handling multicollinearity and selecting relevant predictors, leading to improved predictive accuracy compared to traditional regression techniques [58]. The objective was to identify the combination that minimizes the model's error metrics, such as mean squared error (MSE) or R squared.…”
Section: Discussion
confidence: 99%
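The workflow quoted above (elastic net regression used to confirm markers flagged by PCA, with the penalty tuned to minimize cross-validated MSE) can be sketched with scikit-learn. This is a minimal illustration on synthetic data; the dataset, dimensions, and variable names are assumptions, not the cited study's actual setup.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic data with a nearly collinear predictor pair, mimicking
# the multicollinearity the elastic net is credited with handling.
n, p = 100, 20
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)   # near-duplicate column
y = 3.0 * X[:, 0] - 2.0 * X[:, 5] + rng.normal(scale=0.5, size=n)

# Standardize so the penalty treats all predictors comparably.
Xs = StandardScaler().fit_transform(X)

# ElasticNetCV searches over the L1/L2 mixing ratio and penalty
# strength, keeping the combination with the lowest cross-validated MSE.
model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(Xs, y)

selected = np.flatnonzero(model.coef_)  # predictors retained by the penalty
```

Unlike the pure L1 penalty, the L2 component lets correlated predictors share weight instead of arbitrarily dropping all but one, which is why elastic net is often preferred under multicollinearity.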
“…The correlations were interpreted based on the guidelines provided by Hinkle et al. in 2003 [60], categorizing correlations as very high positive (negative) correlation (±0.90 to 1.00), high positive (negative) correlation (±0.70 to 0.90), moderately positive (negative) correlation (±0.50 to 0.70), low positive …”
Section: Discussion
confidence: 99%
“…[16][17][18] It offers a distinct advantage in mitigating the risk of over-fitting and reducing model variance through simultaneous feature selection and regularization. 19 LASSO proves particularly valuable when dealing with problems that involve numerous features (the extensive proteomics data per sample) but a relatively small sample size (the limited number of participants that meet the recruitment criteria for a cohort study). 20 In this cohort study, we assessed the neurological recoveries of patients who underwent intramedullary surgery, monitored changes in CSF protein profiles in these patients using TMT-MS analysis, and employed LASSO penalized regression modeling for identification of CSF protein sets associated with spinal function recovery in the corresponding patient populations.…”
Section: Introduction
confidence: 99%
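The LASSO setting described above, with many features (proteomic measurements per sample) but few samples (recruited participants), can be sketched as follows. The data are synthetic and the dimensions are illustrative assumptions, not those of the cited cohort study.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)

# High-dimensional regime from the text: far more features (proteins)
# than samples (participants), i.e. p >> n.
n, p = 40, 500
X = rng.normal(size=(n, p))
true_idx = [3, 17, 250]                          # hypothetical signal proteins
y = X[:, true_idx] @ np.array([2.0, -1.5, 1.0]) + rng.normal(scale=0.3, size=n)

# LassoCV tunes the L1 penalty strength by cross-validation; the L1
# penalty drives most coefficients exactly to zero, so feature
# selection and regularization happen in a single fit.
lasso = LassoCV(cv=5, max_iter=5000).fit(X, y)
protein_set = np.flatnonzero(lasso.coef_)        # candidate protein set
```

The exact-zero coefficients are what make LASSO attractive here: the fitted model directly outputs a small candidate protein set rather than requiring a post hoc coefficient threshold.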
“…Hereinafter, we briefly review some related works on omics data analysis. First of all, massive omics data often contain many covariates but only a few samples, and a considerable number of uncorrelated or independent covariates greatly hinders subsequent analysis and applications ( Wang et al, 2022 ). Therefore, it is necessary to perform dimension reduction or variable pre-filtering.…”
Section: Introduction
confidence: 99%
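One common form of the variable pre-filtering mentioned in that passage is a simple variance filter: covariates that are nearly constant across samples carry little information and are dropped before any penalized regression is fit. A minimal sketch, assuming a synthetic omics-style matrix and an illustrative variance cutoff:

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

rng = np.random.default_rng(2)

# Omics-style matrix: few samples, many covariates, where half the
# covariates are near-constant across samples and thus uninformative.
n, p = 30, 1000
X = rng.normal(size=(n, p))
X[:, 500:] *= 0.01          # shrink half the columns to near-constant

# Pre-filter: remove covariates whose variance falls below the cutoff
# before handing the reduced matrix to a downstream penalized model.
filt = VarianceThreshold(threshold=0.1)
X_reduced = filt.fit_transform(X)
```

Filtering on variance alone ignores the response, so it never peeks at labels and can safely run before cross-validated model fitting; response-aware screening would instead need to happen inside each cross-validation fold.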