Published: 2020
DOI: 10.1016/j.amc.2020.125500

Safe feature screening rules for the regularized Huber regression


Cited by 8 publications (2 citation statements)
References 21 publications
“…(i) Let β*(λ) = 0. By Theorem 3.1 and the KKT system (5), we know that y = α, θ*(λ) ∈ F, and λ ≥ ‖X^⊤θ_max‖_∞ for any θ_max ∈ F defined in (6). Because F is a set, λ needs to satisfy that λ ≥ max…”
Section: Theorem 3.1 (Strong Duality Theorem)
mentioning, confidence: 99%
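The statement above concerns the threshold at which the zero solution β*(λ) = 0 becomes optimal. As a rough illustration only (the dual feasible set F, θ_max, and equations (5) and (6) are defined in the cited paper and are not reproduced here), the Python sketch below computes an analogous threshold for an ℓ1-regularized Huber regression without intercept, using the standard subgradient condition λ ≥ ‖X^⊤ψ_δ(y)‖_∞, where ψ_δ is the derivative of the Huber loss. The problem formulation, the δ default, and the helper names huber_score and lambda_max_huber_lasso are assumptions made for this sketch, not the paper's construction.

```python
# Sketch (not the cited paper's exact construction): for
#   min_beta  sum_i H_delta(y_i - x_i^T beta) + lambda * ||beta||_1,
# the zero solution beta*(lambda) = 0 is optimal whenever
#   lambda >= || X^T psi_delta(y) ||_inf,
# where psi_delta is the Huber score evaluated at the residuals at beta = 0 (i.e., y).
import numpy as np

def huber_score(r, delta=1.345):
    """Derivative of the Huber loss: r clipped to [-delta, delta]."""
    return np.clip(r, -delta, delta)

def lambda_max_huber_lasso(X, y, delta=1.345):
    """Smallest lambda for which beta*(lambda) = 0 under the formulation sketched above."""
    return np.linalg.norm(X.T @ huber_score(y, delta), ord=np.inf)

# Usage: any lambda at or above this value can safely be assigned the all-zero solution.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
y = rng.standard_normal(50)
print(lambda_max_huber_lasso(X, y))
```

Any λ at or above this value can be screened directly to the all-zero solution, which is the role the bound λ ≥ max plays in the quoted argument.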
“…For robust loss functions, there are a few screening rules. For instance, Chen et al [6] proposed the safe screening rules for the regularized Huber regression. To the best of our knowledge, existing screening rules mainly focus on regularized models with differentiable loss functions, such as the quadratic function, logistic function and Huber function.…”
Section: Introduction
mentioning, confidence: 99%