2016
DOI: 10.1007/s10589-016-9832-2
Feature subset selection for logistic regression via mixed integer optimization

Abstract: This paper concerns a method of selecting a subset of features for a logistic regression model. Information criteria, such as the Akaike information criterion and the Bayesian information criterion, are employed as a goodness-of-fit measure. The feature subset selection problem is formulated as a mixed integer linear optimization problem, which can be solved with standard mathematical optimization software by using a piecewise linear approximation. Computational experiments show that, in terms of solution quality,…
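To make the objective concrete, here is a minimal sketch of information-criterion-based feature subset selection for logistic regression. Note this is not the paper's method: the paper solves the same AIC objective as a mixed integer linear optimization problem via a piecewise linear approximation of the logistic loss, whereas this sketch simply enumerates all subsets (feasible only for a handful of features) and fits each model by Newton-Raphson. All function names and the toy data are illustrative assumptions.

```python
# Illustrative sketch only: exhaustive best-subset selection for logistic
# regression scored by AIC. The paper instead formulates this objective as
# a mixed integer linear program; brute-force enumeration shown here is
# exponential in the number of features.
import itertools
import numpy as np

def fit_logistic(X, y, iters=50):
    """Fit logistic regression by Newton-Raphson; return (beta, log-likelihood)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        W = mu * (1.0 - mu)
        H = X.T @ (W[:, None] * X) + 1e-8 * np.eye(p)  # tiny jitter for stability
        step = np.linalg.solve(H, X.T @ (y - mu))
        beta += step
        if np.max(np.abs(step)) < 1e-8:
            break
    mu = np.clip(1.0 / (1.0 + np.exp(-X @ beta)), 1e-12, 1 - 1e-12)
    ll = float(y @ np.log(mu) + (1 - y) @ np.log(1 - mu))
    return beta, ll

def best_subset_aic(X, y):
    """Return (AIC, feature index tuple) minimizing AIC; intercept always kept."""
    n, p = X.shape
    best = (np.inf, ())
    ones = np.ones((n, 1))
    for k in range(p + 1):
        for S in itertools.combinations(range(p), k):
            Xs = np.hstack([ones, X[:, list(S)]])
            _, ll = fit_logistic(Xs, y)
            aic = 2 * (len(S) + 1) - 2 * ll  # AIC = 2k - 2 log L
            if aic < best[0]:
                best = (aic, S)
    return best

# Toy data (hypothetical): the response depends on features 0 and 2 only.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (rng.random(300) < 1 / (1 + np.exp(-(2 * X[:, 0] - 2 * X[:, 2])))).astype(float)
aic, subset = best_subset_aic(X, y)
print(subset)
```

Swapping the AIC line for `aic = np.log(n) * (len(S) + 1) - 2 * ll` gives BIC selection instead; the MILP formulation in the paper handles either criterion without the exponential enumeration.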

Cited by 48 publications (54 citation statements)
References 30 publications
“…Optimization problems where sparse solutions are sought arise frequently in modern science and engineering. Just as examples, applications of sparse optimization regard compressed sensing in signal processing [1,2], best subset selection [3][4][5][6] and sparse inverse covariance estimation [7,8] in statistics, sparse portfolio selection [9] in decision science, neural networks compression in machine learning [10,11].…”
Section: Introduction
Confidence: 99%
“…Since proper variable selection is essential for data analysis, it has a great advantage over heuristic methods. A direction of future research will be to extend our formulation to classification algorithms (e.g., Sato, Takano, Miyashiro, & Yoshise, 2015).…”
Section: Results
Confidence: 99%
“…One of these approaches was first proposed in the 1970s [3], and recently they have received renewed attention due to advances in algorithms and hardware [8], [16]. The MIO approaches have recently been used effectively for linear regression [14], [19], [20], logistic regression [7], [23], support vector machine [18], classification tree [5], and various applications [29], [30].…”
Section: Introduction
Confidence: 99%