2016
DOI: 10.1016/j.jmva.2015.04.007

COBRA: A combined regression strategy

Abstract: A new method for combining several initial estimators of the regression function is introduced. Instead of building a linear or convex optimized combination over a collection of basic estimators r_1, ..., r_M, we use them as a collective indicator of the proximity between the training data and a test observation. This local distance approach is model-free and very fast. More specifically, the resulting nonparametric/nonlinear combined estimator is shown to perform asymptotically at least as well in the L^2 se…
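To make the proximity-based combination concrete, here is a minimal sketch of a COBRA-style aggregation rule, assuming the ε-threshold reading of the abstract: a training point is retained when every base estimator predicts it within ε of its prediction at the query point, and the retained responses are averaged. Names and the fallback convention are illustrative, not the paper's reference implementation.

```python
import numpy as np

def cobra_predict(base_preds_train, y_train, base_preds_query, eps=0.1):
    """Proximity-based combination of M base estimators (illustrative sketch).

    base_preds_train : (n, M) array, predictions of r_1, ..., r_M on the
        training points.
    base_preds_query : (M,) array, predictions of the same estimators at
        the query point.
    """
    # Retain training points on which every base estimator agrees with its
    # own prediction at the query point up to eps.
    close = np.all(np.abs(base_preds_train - base_preds_query) <= eps, axis=1)
    if not close.any():
        # No unanimous training point: fall back to the global mean
        # (a simple convention for this sketch).
        return y_train.mean()
    # Combined estimate: average response over the retained points.
    return y_train[close].mean()
```

In the method itself the threshold acts as a smoothing parameter to be calibrated on data; the sketch hard-codes it only for brevity.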

Cited by 23 publications (74 citation statements). References 32 publications.
“…All variants of KernelCobra ship as part of the pycobra Python library introduced by Guedj and Srinivasa Desikan [2] (from version 0.2.4), and are designed to be used in a scikit-learn environment. We will conduct in future work a theoretical analysis of the kernelised COBRA algorithm to complete the theory provided by Biau et al [1]. Table 1.…”
Section: Discussion (mentioning)
confidence: 99%
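For orientation, a hypothetical usage sketch of the pycobra library mentioned above inside a scikit-learn workflow; the import path, class name, and the epsilon argument are assumptions about the library's interface rather than documented facts, so consult pycobra's own documentation.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Assumed import path and class name for the COBRA combiner in pycobra.
from pycobra.cobra import Cobra

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Assumed to follow the scikit-learn fit/predict convention, training its
# own pool of base estimators internally.
combiner = Cobra(epsilon=0.5)
combiner.fit(X_train, y_train)
y_pred = combiner.predict(X_test)

print("held-out MSE:", np.mean((y_pred - y_test) ** 2))
```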
“…We introduce KernelCobra, a non-linear learning strategy for combining an arbitrary number of initial predictors. KernelCobra builds on the COBRA algorithm introduced by Biau et al [1], which combined estimators based on a notion of proximity of predictions on the training data. While the COBRA algorithm used a binary threshold to declare which training data were close and to be used, we generalize this idea by using a kernel to better encapsulate the proximity information.…”
mentioning
confidence: 99%
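A minimal sketch of the contrast drawn above: COBRA's binary ε-threshold weights versus smooth kernel weights over the distance between base predictions. The Gaussian kernel and its bandwidth are illustrative choices, not necessarily the kernel used by KernelCobra.

```python
import numpy as np

def threshold_weights(base_preds_train, base_preds_query, eps=0.1):
    # COBRA-style weight: 1 if every base estimator's prediction on the
    # training point is within eps of its prediction at the query, else 0.
    agree = np.abs(base_preds_train - base_preds_query) <= eps
    return np.all(agree, axis=1).astype(float)

def kernel_weights(base_preds_train, base_preds_query, bandwidth=1.0):
    # Kernelised weight: decays smoothly with the distance between the
    # vectors of base predictions, instead of a hard cut-off.
    sq_dist = np.sum((base_preds_train - base_preds_query) ** 2, axis=1)
    return np.exp(-sq_dist / (2.0 * bandwidth ** 2))

def combine(weights, y_train):
    # Weighted average of training responses; uniform fallback when all
    # weights vanish (a convention for this sketch).
    total = weights.sum()
    return y_train.mean() if total == 0 else np.dot(weights, y_train) / total
```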
“…Indeed, [3] demonstrated optimality for the crowd operating as a regression machine. And the single assumption guaranteeing this optimality in either case is that the known outcomes are bounded; see the discussion following Proposition 2.2 in [3].…”
Section: Discussion (mentioning)
confidence: 99%
“…However, given sufficient data, the method has been shown analytically to be at least as good as the best machine in the family; see [3]. For this reason the method is called here the optimal crowd machine.…”
Section: Introduction (mentioning)
confidence: 99%
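Schematically, the guarantee invoked above can be stated as an oracle-type bound in quadratic risk, treating the base machines as fixed for readability; the precise constants, remainder term, and the bounded-outcome condition are spelled out in [3].

```latex
% Combined estimator T_n is asymptotically at least as accurate, in L^2 risk,
% as the best of the M base machines r_1, ..., r_M.
\limsup_{n \to \infty}
  \mathbb{E}\!\left[\bigl(T_n(X) - r(X)\bigr)^{2}\right]
  \;\le\;
  \min_{1 \le m \le M}
  \mathbb{E}\!\left[\bigl(r_m(X) - r(X)\bigr)^{2}\right]
```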
“…The five following models cover a wide spectrum of regression and classification problems. Models 1-3 and 5 come from Biau et al (2016). Model 4 is a slight variation of a benchmark model in Hastie et al (2009).…”
Section: Description of the Data Sets (mentioning)
confidence: 99%