2020
DOI: 10.1214/20-sts701rej
Rejoinder: Sparse Regression: Scalable Algorithms and Empirical Performance

Cited by 5 publications (5 citation statements)
References 14 publications
“…For example, the best-subsets approach could be used for obtaining an initial solution to glasso, as an alternative to the ℓ1-regularized node-wise procedure of Meinshausen and Bühlmann (2006). Alternatively, the methods pioneered by Bertsimas and colleagues (Bertsimas & King, 2016; Bertsimas et al., 2016; Bertsimas et al., 2020a, 2020b) could ultimately become direct competitors for glasso.…”
Section: Discussion (mentioning, confidence: 99%)
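As an illustration of the node-wise procedure this excerpt contrasts with best subsets, the sketch below regresses each variable on all others with an ℓ1 penalty and takes the union of the selected neighborhoods as a graph estimate. This is a minimal sketch of the idea, not the cited implementation; the penalty level `alpha` and the OR-combination rule are assumptions.

```python
# Minimal sketch (ours, not the cited implementation) of l1-regularized
# node-wise neighborhood selection: regress each variable on the rest
# and keep an edge wherever a coefficient is nonzero.
# The penalty level `alpha` and the OR rule are assumptions.
import numpy as np
from sklearn.linear_model import Lasso

def nodewise_support(X, alpha=0.1):
    """Boolean adjacency estimate from p node-wise Lasso regressions."""
    n, p = X.shape
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = np.delete(np.arange(p), j)
        fit = Lasso(alpha=alpha).fit(X[:, others], X[:, j])
        adj[j, others] = fit.coef_ != 0
    return adj | adj.T  # OR rule: keep an edge if either regression selects it

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
print(nodewise_support(X).sum() // 2, "edges selected")
```

The resulting sparsity pattern could then serve as the initialization for glasso that the excerpt envisions.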
“…The n = 125 setting is a smaller sample size than what is found in most applications in the network psychometrics literature, whereas n = 2,000 is comparable to the sample sizes of the two empirical examples. As noted by Bertsimas et al. (2020a), good variable selection procedures should converge toward perfect sensitivity and specificity as sample size increases. Thus, we anticipate significant improvement in sensitivity and specificity as n increases over the range from 125 to 2,000.…”
Section: Design Features (mentioning, confidence: 99%)
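For concreteness, the sensitivity and specificity this excerpt refers to can be computed as below: sensitivity is the fraction of truly nonzero coefficients recovered, specificity the fraction of true zeros correctly excluded. This is our own minimal helper; the variable names and example data are assumptions, not taken from the cited study.

```python
# Support-recovery metrics for a variable selection procedure.
import numpy as np

def support_metrics(beta_true, beta_hat):
    true_nz = beta_true != 0
    est_nz = beta_hat != 0
    sensitivity = np.mean(est_nz[true_nz]) if true_nz.any() else 1.0
    specificity = np.mean(~est_nz[~true_nz]) if (~true_nz).any() else 1.0
    return sensitivity, specificity

beta_true = np.array([1.5, 0.0, -2.0, 0.0, 0.0])
beta_hat  = np.array([1.2, 0.3, -1.8, 0.0, 0.0])  # one false positive
print(support_metrics(beta_true, beta_hat))  # (1.0, 0.666...)
```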
“…A future direction of study will be to develop an efficient algorithm specialized for solving our MIQO problem. We are now working on extending several MIO-based high-performance algorithms [24,48,49] to sparse Poisson regression. Another direction of future research is to improve the performance of our methods for selecting tangent lines.…”
Section: PLOS ONE (mentioning, confidence: 99%)
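The tangent-line idea this excerpt mentions can be illustrated directly: since exp(·) is convex, every tangent line is a global underestimator, so the maximum over a few tangents gives a piecewise-linear lower bound that can stand in for the exponential inside an MIQO model. The sketch below is ours; the tangent points are arbitrary assumptions, not those selected by the cited method.

```python
# Piecewise-linear lower bound on exp(t) from tangent lines: because exp is
# convex, exp(t0) + exp(t0)*(t - t0) <= exp(t) for every tangent point t0.
# The tangent points below are arbitrary assumptions for illustration.
import numpy as np

def tangent_lower_bound(t, points=(-1.0, 0.0, 1.0)):
    """Maximum of tangent lines to exp(.) at the given points."""
    t = np.asarray(t)
    lines = [np.exp(t0) + np.exp(t0) * (t - t0) for t0 in points]
    return np.max(lines, axis=0)

grid = np.linspace(-2, 2, 5)
print(np.exp(grid) - tangent_lower_bound(grid))  # gaps are all nonnegative
```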
“…This ℓ0-norm constrained MIO problem is non-convex and NP-hard [22], corresponding to the best subset selection problem in the wider statistics community [21,2]. The NP-hardness of the problem has contributed to the belief that discrete optimization problems were intractable [3]. For this reason, many sparsity-promoting techniques have focused on computationally feasible algorithms that solve approximations, including the Lasso [33], Elastic-net [37], nonconvex regularization [15,19] and stepwise regression [13].…”
Section: Introduction (mentioning, confidence: 99%)
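To make the ℓ0-constrained problem in this excerpt concrete, the toy sketch below solves best subset selection by brute-force enumeration of supports, which scales only to tiny p; that combinatorial blow-up is precisely what motivates the MIO formulations and the ℓ1-style relaxations listed above. The data and problem sizes are synthetic assumptions.

```python
# Best subset selection, min ||y - X b||^2 s.t. ||b||_0 <= k, by enumeration.
# Feasible only for tiny p; data and sizes are synthetic assumptions.
import itertools
import numpy as np

def best_subset(X, y, k):
    """Return the size-k support with the smallest residual sum of squares."""
    n, p = X.shape
    best_rss, best_support = np.inf, ()
    for support in itertools.combinations(range(p), k):
        Xs = X[:, support]
        b, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        r = y - Xs @ b
        if r @ r < best_rss:
            best_rss, best_support = r @ r, support
    return best_support, best_rss

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 8))
beta = np.zeros(8)
beta[[1, 4]] = [2.0, -3.0]
y = X @ beta + 0.1 * rng.standard_normal(100)
print(best_subset(X, y, k=2))  # expected support: (1, 4)
```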