2018
DOI: 10.48550/arxiv.1808.02526
Preprint

MIP-BOOST: Efficient and Effective $L_0$ Feature Selection for Linear Regression

Cited by 3 publications (3 citation statements). References 0 publications.
“…This is one argument commonly used against exact (cardinality-constrained) sparse regression formulations: the sparsity parameters might not be known, in which case they need to be cross-validated hence resulting in a dramatic increase in the required computational effort. Although, in many cases, such parameters are determined by the application, Kenney et al (2018) address such concerns by proposing efficient cross validation strategies.…”
Section: Experiments On Synthetic Datasets
confidence: 99%
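The computational concern raised in the statement above is that cross-validating the sparsity level multiplies the number of subset-selection problems to be solved. The sketch below is only an illustration of that naive setup (brute-force best subsets plus K-fold cross-validation over k), not the cross-validation strategies proposed in MIP-BOOST; the function names, data, and grid are illustrative assumptions.

```python
# Illustrative sketch only: naive CV over the sparsity level k for
# cardinality-constrained regression. Every (fold, k) pair re-solves a
# separate best-subset problem, which is the cost the quote refers to.
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def best_subset(X, y, k):
    """Exhaustively search all size-k column subsets; return the best one."""
    best_rss, best_cols = np.inf, None
    for cols in itertools.combinations(range(X.shape[1]), k):
        cols = list(cols)
        fit = LinearRegression().fit(X[:, cols], y)
        rss = np.sum((y - fit.predict(X[:, cols])) ** 2)
        if rss < best_rss:
            best_rss, best_cols = rss, cols
    return best_cols

def cv_choose_k(X, y, k_grid, n_splits=5, seed=0):
    """Pick k by K-fold CV; each fold/k pair requires a fresh subset solve."""
    folds = list(KFold(n_splits=n_splits, shuffle=True, random_state=seed).split(X))
    cv_err = {}
    for k in k_grid:
        errs = []
        for tr, te in folds:
            cols = best_subset(X[tr], y[tr], k)
            fit = LinearRegression().fit(X[tr][:, cols], y[tr])
            errs.append(np.mean((y[te] - fit.predict(X[te][:, cols])) ** 2))
        cv_err[k] = np.mean(errs)
    return min(cv_err, key=cv_err.get), cv_err

# Tiny synthetic example: 3 of 10 features are active.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + 0.1 * rng.standard_normal(100)
k_hat, _ = cv_choose_k(X, y, k_grid=range(1, 6))
print("selected k:", k_hat)
```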
“…Since the function to minimize does not depend on k, any piece-wise linear lower approximation of c(s) computed to solve (3) for some value of k can be reused to solve the problem at another sparsity level. In recent work, Kenney et al [32] proposed a combination of implementation recipes to optimize such search procedures. As for γ, we apply the procedure described in Chu et al [11], starting with a low value $\gamma_0$ (typically scaling as $1/\max_i x_i$…”
Section: Implementation and Publicly Available Code
confidence: 99%
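The statement above describes reusing a piece-wise linear lower approximation of c(s) across sparsity levels, since the approximated function itself does not depend on k. The sketch below shows the underlying mechanism under simplifying assumptions: a pool of cuts (s_j, c(s_j), gradient) collected at one sparsity level remains a valid lower bound at any other. The class name `CutPool` and the toy quadratic surrogate are illustrative assumptions, not the cited authors' code.

```python
# Illustrative sketch: cuts of a convex function c(s) form a piece-wise
# linear lower bound that is independent of the cardinality constraint,
# so cuts gathered at one sparsity level k can be reused at another.
import numpy as np

class CutPool:
    """Stores cuts (s_j, c(s_j), grad_j) defining a lower approximation of c."""
    def __init__(self):
        self.cuts = []

    def add(self, s, value, grad):
        self.cuts.append((np.asarray(s, float), float(value), np.asarray(grad, float)))

    def lower_bound(self, s):
        """Evaluate max_j [ c(s_j) + grad_j . (s - s_j) ], a valid lower bound on c(s)."""
        s = np.asarray(s, float)
        if not self.cuts:
            return -np.inf
        return max(v + g @ (s - sj) for sj, v, g in self.cuts)

# Toy convex surrogate c(s) = ||s||^2 standing in for the true objective.
c = lambda s: float(s @ s)
grad_c = lambda s: 2.0 * s

pool = CutPool()
for s_j in np.eye(4):                       # cuts gathered while solving at sparsity k = 1
    pool.add(s_j, c(s_j), grad_c(s_j))

s_new = np.array([1.0, 1.0, 0.0, 0.0])      # candidate support vector for k = 2
print("reused lower bound:", pool.lower_bound(s_new), "vs true value:", c(s_new))
```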
“…When used in conjunction with warm starts from a projected gradient descent method, showed that their mixed-integer optimization approach for best subsets can be applied to problems with dimensions as large as p ≈ 1000. This development represents the first time that the best subsets estimator has been tractable for contemporary high-dimensional data after at least 50 years of literature and has paved the way for exciting new research (Bertsimas et al. 2019; Hastie et al. 2019; Hazimeh and Mazumder 2019; Kenney et al. 2019; Kreber 2019; Bertsimas and Van Parys 2020; Takano and Miyashiro 2020).…”
Section: Introduction
confidence: 99%
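The quoted passage refers to warm starts obtained from a projected gradient descent method for the cardinality-constrained least-squares problem. A minimal sketch of such a warm start, via iterative hard thresholding with a constant 1/L step size, is given below; the step-size rule, stopping criterion, and function names are illustrative assumptions rather than the exact procedure of the cited work.

```python
# Illustrative sketch: projected gradient descent (iterative hard thresholding)
# on least squares under an L0 constraint, used to produce a warm start for an
# exact mixed-integer solve. Step size and stopping rule are assumptions.
import numpy as np

def hard_threshold(beta, k):
    """Keep the k largest-magnitude entries of beta and zero out the rest."""
    out = np.zeros_like(beta)
    keep = np.argsort(np.abs(beta))[-k:]
    out[keep] = beta[keep]
    return out

def iht_warm_start(X, y, k, n_iter=500, tol=1e-8):
    """Projected gradient descent on 0.5 * ||y - X beta||^2 over {beta : ||beta||_0 <= k}."""
    L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        beta_new = hard_threshold(beta - grad / L, k)
        if np.linalg.norm(beta_new - beta) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta                               # support/coefficients to warm-start the MIO solver

# Synthetic example: 5 of 50 features are active.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
beta_true = np.zeros(50)
beta_true[:5] = [3.0, -2.0, 1.5, -1.0, 2.0]
y = X @ beta_true + 0.1 * rng.standard_normal(200)
print("recovered support:", np.nonzero(iht_warm_start(X, y, k=5))[0])
```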