2002
DOI: 10.1201/9781420035933
Subset Selection in Regression

Cited by 962 publications (692 citation statements)
References 0 publications
“…This worked well and the performance reported here is comparable to those models [4]. It is interesting that the terms selected by the present method match, quite closely, those of the earlier models.…”
Section: Results (supporting)
confidence: 82%
“…This method is a bottom-up search procedure, where the term floating indicates that the number of features changes dynamically, with one feature included and/or excluded at each iteration. A fairly comprehensive treatment of the question of subset selection [4] describes a number of other methods at some length and touches on the more recent focus on "data-driven" approaches, whereby a modified (regularized) objective function that in some way penalizes the inclusion of terms that have low value is optimized, e.g., via a maximum likelihood procedure.…”
Section: Introduction (mentioning)
confidence: 99%
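The floating search described in that statement can be sketched in a few lines. This is a minimal illustration, not code from the cited work: the function names (`rss`, `sffs`) are invented for this example, and residual sum of squares of a least-squares fit stands in for whatever criterion a real application would use.

```python
import numpy as np

def rss(X, y, subset):
    """Residual sum of squares of a least-squares fit on the given feature subset."""
    A = X[:, sorted(subset)]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return float(resid @ resid)

def sffs(X, y, k):
    """Sequential floating forward selection: after each forward inclusion,
    features are conditionally excluded while a smaller subset improves on
    the best score previously seen at that size."""
    selected, best = set(), {}
    while len(selected) < k:
        # Forward step: include the feature that lowers RSS the most.
        add = min((f for f in range(X.shape[1]) if f not in selected),
                  key=lambda f: rss(X, y, selected | {f}))
        selected.add(add)
        best[len(selected)] = min(best.get(len(selected), np.inf),
                                  rss(X, y, selected))
        # Floating backward steps: exclude a feature if the reduced subset
        # beats the best subset of that smaller size seen so far.
        while len(selected) > 2:
            drop = min(selected, key=lambda f: rss(X, y, selected - {f}))
            if rss(X, y, selected - {drop}) < best.get(len(selected) - 1, np.inf):
                selected.remove(drop)
                best[len(selected)] = rss(X, y, selected)
            else:
                break
    return sorted(selected)
```

The conditional backward passes are what make the subset size change dynamically rather than grow monotonically, which is the defining property the quote attributes to the term "floating".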
“…The research on sparse and regularized solutions has gained increasing interest during the last ten years [7]. This is partly due to advances in measurement technologies, e.g., in molecular biology, where high-throughput technologies allow simultaneous measurement of tens of thousands of variables.…”
Section: Methods (mentioning)
confidence: 99%
“…In ordinary linear regression, there are a variety of classical techniques for variable selection, such as forward selection, backward elimination and stepwise selection; see references [7,8] for a thorough review of these methods. To gauge the number of variables included in the model, Mallows' Cp [9,10], Akaike's Information Criterion [11,12] and Schwarz's Bayesian Information Criterion [13] are widely used.…”
Section: Introduction (mentioning)
confidence: 99%
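The classical procedure and stopping criterion named in that statement combine naturally. The sketch below is illustrative only (the function names are invented, and BIC stands in for the other criteria mentioned): greedy forward selection that stops as soon as no remaining variable lowers Schwarz's BIC.

```python
import numpy as np

def bic(X, y, subset):
    """Schwarz's BIC, n*log(RSS/n) + k*log(n), for a linear model on `subset`."""
    n = len(y)
    if subset:
        A = X[:, sorted(subset)]
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
    else:
        resid = y  # null model with no predictors (assumes y is centred)
    return n * np.log(float(resid @ resid) / n) + len(subset) * np.log(n)

def forward_select(X, y):
    """Forward selection: repeatedly add the BIC-minimising variable,
    stopping when no addition improves the criterion."""
    selected, score = set(), bic(X, y, set())
    while len(selected) < X.shape[1]:
        f = min((j for j in range(X.shape[1]) if j not in selected),
                key=lambda j: bic(X, y, selected | {j}))
        new = bic(X, y, selected | {f})
        if new >= score:
            break  # no candidate lowers BIC; stop here
        selected.add(f)
        score = new
    return sorted(selected)
```

Backward elimination is the mirror image (start from the full set and drop the variable whose removal lowers BIC the most), and swapping in AIC or Mallows' Cp only changes the scoring function.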