2021
DOI: 10.1007/978-3-030-67664-3_12

End-to-End Learning for Prediction and Optimization with Gradient Boosting

Cited by 3 publications (3 citation statements) · References 9 publications
“…Subsequent approaches applied implicit differentiation to the KKT optimality conditions of constrained problems directly, but only for special problem classes such as Quadratic Programs. Konishi and Fukunaga [2021] extend the method of , by modeling second-order derivatives of the optimization for training with gradient boosting methods. Donti et al [2017] use the differentiable quadratic programming solver of to approximately differentiate general convex programs through quadratic surrogate problems.…”
Section: A Related Work
confidence: 99%
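
The statement above describes training gradient boosting with first- and second-order derivatives of a downstream optimization problem. As a rough illustration of that mechanism (not the construction from Konishi and Fukunaga [2021]), the sketch below feeds the gradient and Hessian of a decision loss into XGBoost's custom-objective hook, using a toy one-dimensional decision problem whose minimizer has a closed form; the problem, the constant a, and the name decision_loss_objective are assumptions made here for illustration.

```python
# Hypothetical sketch (not the paper's code): gradient boosting trained
# end-to-end against a downstream decision problem. Toy assumption: each
# sample has a true scalar cost c, the booster predicts c_hat, and the
# induced decision is z*(c_hat) = argmin_z 0.5*a*z^2 - c_hat*z = c_hat / a.
# The decision loss is the true objective evaluated at z*(c_hat); its first
# and second derivatives w.r.t. c_hat are supplied to XGBoost, which is the
# role the second-order information of the optimization plays above.
import numpy as np
import xgboost as xgb

a = 2.0  # curvature of the toy decision problem (assumed)

def decision_loss_objective(preds, dtrain):
    """Gradient and Hessian of the decision loss w.r.t. the predictions."""
    c_true = dtrain.get_label()
    # L(c_hat) = 0.5*a*z*^2 - c_true*z*  with  z* = c_hat / a
    grad = (preds - c_true) / a           # dL / dc_hat
    hess = np.full_like(preds, 1.0 / a)   # d^2 L / dc_hat^2
    return grad, hess

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
c = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

dtrain = xgb.DMatrix(X, label=c)
booster = xgb.train({"max_depth": 3, "eta": 0.1}, dtrain,
                    num_boost_round=50, obj=decision_loss_objective)
z_star = booster.predict(dtrain) / a   # decisions induced by the predictions
```

In this toy case the decision-loss derivatives reduce to scaled regression residuals, but the same hook accepts derivatives obtained by differentiating through a genuinely constrained optimization, which is where the methods discussed above differ.
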
“…The authors also consider a random forest [10] implementation, but do not consider a gradient boosting ensemble approach. Konishi and Fukunaga [27] consider gradient boosting under the SPO framework for a subset of optimization problems, namely those with linear inequality constraints, but do not consider more general convex optimization problems. To our knowledge, dboost is the first 'smart' gradient boosting implementation that can support any convex optimization program that can be cast as a convex quadratic cone program.…”
Section: Related Work
confidence: 99%
“…In this paper we present dboost, a general-purpose framework that combines the strengths of gradient boosting ensembles with the SPO framework. Previous work [27] considers gradient boosting for integrated prediction and optimization, but only for a small subset of optimization problems with linear inequality constraints. In contrast, the dboost framework can support any optimization problem that can be cast as a convex quadratic cone program (CQP), and thus supports linear, quadratic and second-order cone programming with general convex cone constraints.…”
Section: Introduction
confidence: 99%
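
Both statements above hinge on differentiating through an optimization problem cast as a convex (quadratic) cone program. The sketch below shows that generic ingredient using cvxpy and cvxpylayers on a toy simplex-constrained problem; it is an illustration of the technique under stated assumptions, not dboost's API or the cited papers' implementations.

```python
# Hypothetical sketch: implicit differentiation through a convex program.
# A small regularized linear-cost problem over the simplex is declared in
# cvxpy, wrapped as a differentiable layer, and the gradient of the realized
# decision cost w.r.t. the predicted cost vector is obtained by backprop.
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n = 3
c = cp.Parameter(n)      # predicted cost vector (parameter of the program)
z = cp.Variable(n)       # decision variable
problem = cp.Problem(
    cp.Minimize(c @ z + 0.5 * cp.sum_squares(z)),  # regularized linear cost
    [z >= 0, cp.sum(z) == 1],                      # simplex constraints
)
layer = CvxpyLayer(problem, parameters=[c], variables=[z])

c_true = torch.tensor([1.0, 2.0, 3.0])      # realized costs (toy data)
c_hat = torch.randn(n, requires_grad=True)  # predicted costs
z_star, = layer(c_hat)                      # solve the convex program
decision_cost = c_true @ z_star             # cost incurred by z*(c_hat)
decision_cost.backward()                    # implicit differentiation
print(c_hat.grad)                           # d(decision cost) / d(c_hat)
```

Gradients of this kind are what an SPO-style gradient boosting scheme would consume as its per-sample training signal in place of ordinary regression residuals.
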