2017
DOI: 10.2139/ssrn.2986630

Optimizing Objective Functions Determined from Random Forests


Cited by 28 publications (27 citation statements)
References 41 publications

“…To our knowledge, Biggs et al (2017) is the only work in the PMO literature that has considered a notion of training relevance. The authors study a property investment problem and ensure similarity between solutions found via a PMO model and the historical data by constraining the search space to be within the convex hull of the features of historical observations.…”
Section: Literature Review
confidence: 99%
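
For readers unfamiliar with the convex-hull restriction described above, a minimal sketch follows. It parameterizes the decision as a convex combination of historical feature vectors and optimizes a linear stand-in objective over those combination weights; the data, the linear objective, and all variable names are illustrative and not taken from Biggs et al (2017), whose actual objective is a trained random forest rather than a linear function.

```python
import numpy as np
from scipy.optimize import linprog

# Historical feature matrix: one row per past observation (illustrative data).
X_hist = np.array([[1.0, 2.0],
                   [3.0, 0.5],
                   [2.0, 4.0]])
c = np.array([1.0, 1.0])  # linear stand-in for the learned objective

# Restrict the decision x to conv(X_hist) by writing x = X_hist.T @ lam,
# with lam >= 0 and sum(lam) == 1, then optimize over lam directly.
res = linprog(
    c=-(X_hist @ c),                   # maximize c.x  <=>  minimize -(X_hist @ c).lam
    A_eq=np.ones((1, X_hist.shape[0])),
    b_eq=np.array([1.0]),
    bounds=[(0, None)] * X_hist.shape[0],
)
x_opt = X_hist.T @ res.x  # recover the decision from the hull weights
print(x_opt)
```

The same reparameterization carries over when the objective is a trained predictive model; the feasible region stays polyhedral, but the overall problem is no longer a plain LP.
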
“…Other application areas include food delivery (Liu et al 2020), scholarship allocation (Bergman et al 2019), personalized pricing (Biggs et al 2021), and auctions (Verwer et al 2017). Optimization problems defined over trained predictive models are often challenging to solve, depending on the properties of the underlying predictive model. Mišić (2020) and Biggs et al (2017) propose exact solution approaches for optimization problems defined over tree ensembles. Mixed integer programming (MIP) formulations for optimization models defined over trained neural networks are studied by Cheng et al (2017), Fischetti and Jo (2018), Bunel et al (2018), Dutta et al (2018), Bergman et al (2019), Grimstad and Andersson (2019a), Schweidtmann and Mitsos (2019), Tjeng et al (2019), Grimstad and Andersson (2019b), Botoeva et al (2020), Anderson et al (2020) and Tsay et al (2021). Ensembles of neural networks are studied by Wang et al…”
confidence: 99%
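
As context for the MIP formulations listed above: a common building block in this line of work is the exact linearization of a single ReLU unit y = max(0, wᵀx + b) using one binary variable and finite pre-activation bounds L ≤ wᵀx + b ≤ U. A standard slack/big-M encoding, stated here as a sketch in our own notation rather than as any specific paper's formulation, is:

```latex
% Exact encoding of y = max(0, w^T x + b), given bounds L <= w^T x + b <= U:
\begin{aligned}
  y - s &= w^{\top} x + b, \\
  0 \le y &\le U z, \qquad 0 \le s \le -L\,(1 - z), \\
  z &\in \{0, 1\}.
\end{aligned}
```

Here z = 1 forces s = 0 and hence y = wᵀx + b ≥ 0, while z = 0 forces y = 0 and wᵀx + b ≤ 0; stacking one such block per neuron yields the MILP over the trained network.
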
“…As embedding predictive models into optimization formulations is a critical aspect of this framework, previous literature has focused on predictive models that can be linearized and formulated as mixed-integer linear programs (MILPs), such as logistic regression, linear models, decision trees, random forests, and neural networks with Rectified Linear Unit (ReLU) activation functions (Bergman et al 2019, Verwer et al 2017, Biggs et al 2017, Mišić 2020). Some predictive models and their regularized versions lead to easier optimization problems and are chosen to eschew computational intractability (Liu et al 2020, Bertsimas et al 2016).…”
Section: Literature Review
confidence: 99%
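
To make the embedding idea concrete, here is a minimal sketch that encodes a single depth-1 regression tree inside a MILP using PuLP. The tree, its split point, the leaf values, the big-M constant, and the cost term are all invented for illustration; none of this is taken from the cited papers.

```python
# A minimal sketch: embedding a depth-1 regression tree
# ("x <= 2 -> predict 1.0, else predict 3.0") into a MILP.
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, value

prob = LpProblem("tree_embedded_opt", LpMaximize)
x = LpVariable("x", lowBound=0, upBound=10)   # decision variable
z = LpVariable("z", cat=LpBinary)             # z = 1 iff the right branch (x > 2) is active
M, eps = 10, 1e-4                             # big-M from the bound on x; eps approximates strictness

# Link the branch indicator to the split x <= 2.
prob += x <= 2 + M * z                # z = 0 forces x <= 2 (left leaf)
prob += x >= 2 + eps - M * (1 - z)    # z = 1 forces x > 2 (right leaf)

# Tree prediction as a linear function of the leaf indicator.
y = 1.0 * (1 - z) + 3.0 * z

# Trade the predicted value off against a linear cost on x.
prob += y - 0.1 * x
prob.solve()
print(value(x), value(y))
```

A random forest is handled the same way, with one indicator per leaf per tree and the prediction averaged across trees, which is why these formulations grow quickly with ensemble size.
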
“…Current research has focused on applying machine learning methodologies to predict the counterfactuals, based on which optimal decisions can be made. Local learning methods such as K-Nearest Neighbors (Altman, 1992), LOESS (LOcally Estimated Scatterplot Smoothing) (Cleveland and Devlin, 1988), CART (Classification And Regression Trees) (Breiman, 2017), and Random Forests (Breiman, 2001) have been studied in Bertsimas and Kallus, 2019; Bertsimas et al., 2019a; Dunn, 2018; Biggs and Hariss, 2018. Extensions to continuous and multi-dimensional decision spaces with observational data were considered in McCord, 2018. To prevent overfitting, Bertsimas and Van Parys, 2017 proposed two robust prescriptive methods based on Nadaraya-Watson and nearest-neighbors learning.…”
Section: The Problem and Related Work
confidence: 99%
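
As a concrete illustration of the local-learning approach described above, the sketch below estimates counterfactual outcomes with a k-nearest-neighbors regressor over (covariate, decision) pairs and prescribes the candidate decision with the best predicted outcome. The synthetic data, the candidate grid, and k = 10 are assumptions made for the example, not choices from the cited works.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))   # observed covariates
d = rng.uniform(0, 1, size=(200, 1))   # historical decisions
# Toy outcome: best decision tracks the first covariate, plus noise.
y = -(d[:, 0] - X[:, 0]) ** 2 + rng.normal(0, 0.05, 200)

# Learn the outcome as a function of (covariates, decision).
model = KNeighborsRegressor(n_neighbors=10).fit(np.hstack([X, d]), y)

# Prescribe for a new covariate vector by scoring a grid of candidate decisions.
x_new = np.array([0.3, 0.7])
candidates = np.linspace(0, 1, 101)
queries = np.column_stack([np.tile(x_new, (101, 1)), candidates])
best = candidates[np.argmax(model.predict(queries))]
print(best)   # should land near x_new[0] = 0.3 under this toy outcome model
```
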