2020
DOI: 10.1080/01621459.2020.1840989
A Tuning-free Robust and Efficient Approach to High-dimensional Regression

Abstract: We introduce a novel approach for high-dimensional regression with theoretical guarantees. The new procedure overcomes the challenge of tuning parameter selection of Lasso and possesses several appealing properties. It uses an easily simulated tuning parameter that automatically adapts to both the unknown random error distribution and the correlation structure of the design matrix. It is robust with substantial efficiency gain for heavy-tailed random errors while maintaining high efficiency for normal random e…

Cited by 34 publications (31 citation statements)
References 53 publications
“…We note that one can add more losses into our framework, for example those in Black and Rangarajan (1996); Basu et al (1998) or Wang et al (2020). Also, our framework is not limited to linear regression.…”
Section: Robust Regression With Tukey's Loss
confidence: 99%
“…where p(·) is the penalty function, such as the ridge penalty or the lasso penalty. Some typical cases of this general formulation are detailed in [51–53].…”
Section: The Robust Penalized Statistical Framework
confidence: 99%
“…For simplicity, we call the problem (1) the tuning-free robust Lasso problem. It has been shown in [40] that the model (1) is very close to the Lasso for normal random errors and is robust with substantial efficiency gain for heavy-tailed errors.…”
Section: Introduction
confidence: 99%
“…Another obstacle is that the Gaussian or sub-Gaussian error assumption can hardly be satisfied for high-dimensional microarray data, climate data, insurance claim data, e-commerce data and many other applications, owing to heavy-tailed errors, which affect the choice of λ and can yield misleading results if standard procedures are applied directly. Therefore, Wang et al. [40] studied the following ℓ1 regularized tuning-free robust regression model min…”
Section: Introduction
confidence: 99%
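The tuning-free scheme described in the abstract and in the excerpt above can be illustrated with a toy sketch. The Python code below is a minimal, hypothetical rendering of a rank-lasso-style estimator: a Wilcoxon-type pairwise absolute loss plus an ℓ1 penalty, with the penalty level λ obtained by simulating the sup-norm of the loss subgradient under random permutations of the residual ranks, so that it adapts to the design without requiring knowledge of the error distribution. The function names (`rank_loss`, `simulate_lambda`, `fit_rank_lasso`), the constants, and the crude proximal-subgradient solver are all illustrative assumptions, not the algorithm of Wang et al. (2020).

```python
import numpy as np

def rank_loss(X, y, beta):
    """Wilcoxon-type rank loss: (1/(n(n-1))) * sum_{i != j} |e_i - e_j|."""
    e = y - X @ beta
    n = len(e)
    return np.abs(e[:, None] - e[None, :]).sum() / (n * (n - 1))

def simulate_lambda(X, n_sim=200, alpha=0.1, c=1.01, rng=None):
    """Simulated tuning parameter (hypothetical sketch): a high quantile of the
    sup-norm of the rank-loss subgradient with residual ranks replaced by a
    uniformly random permutation, which is pivotal under any error law."""
    if not isinstance(rng, np.random.Generator):
        rng = np.random.default_rng(rng)
    n = X.shape[0]
    sups = []
    for _ in range(n_sim):
        r = rng.permutation(n) + 1                     # random ranks 1..n
        s = X.T @ (2 * r - n - 1) * 2 / (n * (n - 1))  # subgradient at the truth
        sups.append(np.abs(s).max())
    return c * np.quantile(sups, 1 - alpha)

def fit_rank_lasso(X, y, lam, n_iter=500, step=0.05):
    """Crude proximal-subgradient solver for rank_loss + lam * ||beta||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        e = y - X @ beta
        ranks = e.argsort().argsort() + 1              # ranks of residuals
        grad = -X.T @ (2 * ranks - n - 1) * 2 / (n * (n - 1))
        beta = beta - step * grad
        # soft-thresholding step for the l1 penalty
        beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)
    return beta
```

On a strong-signal toy design, `simulate_lambda(X)` returns a positive λ that depends only on `X` (not on `y`), and `fit_rank_lasso` recovers a heavily dominant first coefficient; a production solver would replace the subgradient loop with a linear-programming or coordinate-descent routine.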