2018
DOI: 10.1111/rssb.12293
Bayesian Regression Tree Ensembles that Adapt to Smoothness and Sparsity

Abstract: Ensembles of decision trees are a useful tool for obtaining flexible estimates of regression functions. Examples of these methods include gradient-boosted decision trees, random forests and Bayesian classification and regression trees. Two potential shortcomings of tree ensembles are their lack of smoothness and their vulnerability to the curse of dimensionality. We show that these issues can be overcome by instead considering sparsity-inducing soft decision trees in which the decisions are treated as …
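The soft decision trees mentioned in the abstract replace each hard split with a probabilistic one: an observation goes right at a node with a probability given by a smooth gating function of the splitting covariate, and the tree's prediction averages the leaf values weighted by the probability of each root-to-leaf path. The sketch below illustrates this idea only; the function names, the dictionary-based tree representation, and the logistic gate with bandwidth `tau` are illustrative assumptions, not the paper's actual SBART implementation.

```python
import numpy as np

def soft_split_prob_right(x, j, c, tau):
    """Probability of routing x to the right child at a node that splits
    covariate j at cutpoint c. tau is a bandwidth: as tau -> 0 this
    approaches the usual hard split 1{x[j] > c}. (Illustrative gate;
    the paper's construction may differ in detail.)"""
    return 1.0 / (1.0 + np.exp(-(x[j] - c) / tau))

def soft_tree_predict(x, node):
    """Prediction of a soft tree: the average of leaf values, weighted
    by the probability of each root-to-leaf path. Internal nodes are
    dicts with keys feature/cut/tau/left/right; leaves hold 'value'."""
    if "value" in node:  # leaf node
        return node["value"]
    p_right = soft_split_prob_right(x, node["feature"], node["cut"], node["tau"])
    return (1.0 - p_right) * soft_tree_predict(x, node["left"]) \
        + p_right * soft_tree_predict(x, node["right"])
```

Because the gate is smooth in `x`, the resulting regression function is smooth as well, which is the mechanism the abstract credits for overcoming the lack of smoothness of ordinary tree ensembles.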

Cited by 96 publications (145 citation statements)
References 37 publications
“…In our approach, by contrast, we smooth over just one target covariate. This avoids the high computational cost associated with the method of Linero and Yang (2017). Moreover, it avoids the somewhat inflexible interpolation and extrapolation behavior associated with their approach to global smoothing.…”
Section: Connection With Existing Work
confidence: 99%
“…BART has been successful in a variety of contexts including prediction and classification (Chipman et al., 2010; Murray, 2017; Linero and Yang, 2017; Linero, 2018; Hernández et al., 2018), survival analysis (Sparapani et al., 2016), and causal inference (Hill, 2011; Hahn et al., 2017; Logan et al., 2017; Sivaganesan et al., 2017).…”
Section: The BART Model
confidence: 99%
“…There have been recent developments on posterior consistency and rates of posterior concentration for Bayesian tree models in prediction contexts (Linero and Yang, 2017; Rockova and van der Pas, 2017). These results require significant modifications to the BART prior, however, which we do not further investigate here.…”
Section: Bayesian Additive Regression Trees for Heterogeneous Treatment Effects
confidence: 99%