2018
DOI: 10.1080/01621459.2017.1407775
Censoring Unbiased Regression Trees and Ensembles

Abstract: This paper proposes a novel paradigm for building regression trees and ensemble learning in survival analysis. Generalizations of the CART and Random Forests algorithms for general loss functions, and in the latter case more general bootstrap procedures, are both introduced. These results, in combination with an extension of the theory of censoring unbiased transformations applicable to loss functions, underpin the development of two new classes of algorithms for constructing survival trees and survival forest…

Cited by 43 publications (56 citation statements)
References 32 publications (82 reference statements)
“…Motivated by the above results, we propose a new bias-correction procedure that actively selects the best splitting variable at each internal node without the influence of the censoring distribution. This establishes a connection with existing methodology developments such as [20] and [37], who convert censored observations to fully observed ones through the inverse probability of censoring weighting. However, our proposed splitting rule is much more general in the sense that it compares the distributions of failure time from the two potential child nodes, rather than focusing on the mean differences.…”
Section: Introduction
confidence: 72%
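The inverse probability of censoring weighting (IPCW) idea referenced in this excerpt can be sketched in a few lines. The helper below is a hypothetical illustration (not code from any of the cited papers): it estimates the censoring survival function with a Kaplan–Meier estimator and weights each uncensored subject by its inverse, assuming independent censoring and no tied observation times.

```python
import numpy as np

def ipcw_weights(time, event):
    """Inverse-probability-of-censoring weights (illustrative sketch).

    Uncensored subjects (event == 1) receive weight 1 / G_hat(T_i-),
    where G_hat is the Kaplan-Meier estimate of the censoring survival
    function; censored subjects receive weight 0.
    """
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    n = len(time)
    order = np.argsort(time)
    at_risk = n - np.arange(n)            # subjects still at risk at each sorted time
    is_censored = 1 - event[order]        # censoring is the "event" when estimating G
    factors = 1.0 - is_censored / at_risk
    g_after = np.cumprod(factors)         # G_hat just after each observed time
    g_before = np.concatenate(([1.0], g_after[:-1]))  # left-continuous G_hat(t-)
    w_sorted = event[order] / g_before    # zero weight for censored subjects
    w = np.empty(n)
    w[order] = w_sorted
    return w

# toy data: observed times and event indicators (1 = failure, 0 = censored)
w = ipcw_weights([2.0, 3.0, 4.0, 5.0], [1, 0, 1, 1])
# w == [1.0, 0.0, 1.5, 1.5]: the censored subject drops out, and the mass it
# would have carried is redistributed to later uncensored subjects
```

A useful sanity check is that the weights sum to the sample size on average, so the reweighted sample mimics an uncensored one.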
“…as the subject-specific weight to fit regression random forests. One can also transform the censored observations into fully observed ones using, e.g., [34], and then fit a regression model with the complete data [29,36,37]. Similar ideas have also been used for imputing censored outcomes [42] when learning an optimal treatment strategy [14].…”
Section: Consistency Under Adaptive Splitting Rules
confidence: 99%
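The transformation strategy this excerpt describes, converting censored observations into fully observed ones and then fitting an ordinary regression learner, can be illustrated with the simplest IPCW-type censoring unbiased transformation. The sketch below is hypothetical and simplified: it takes the censoring survival function G as a known callable, whereas in practice G would be estimated (e.g., by Kaplan–Meier).

```python
import numpy as np

def ipcw_transform(time, event, G):
    """Simple IPCW-form censoring unbiased transformation (illustrative).

    Y*_i = event_i * time_i / G(time_i).  Under censoring independent of
    (T, X), E[Y* | X] = E[T | X], so Y* can be fed to any complete-data
    regression learner (tree, forest, etc.) in place of the true T.
    """
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=float)
    return event * time / np.array([G(t) for t in time])

# toy example: censoring ~ Exp(rate = 0.1), so G(t) = exp(-0.1 * t)
y_star = ipcw_transform([2.0, 3.0, 4.0], [1, 0, 1], lambda t: np.exp(-0.1 * t))
# censored subjects map to 0, uncensored ones are inflated by 1 / G(t)
```

Doubly robust versions of this transformation (as developed in the paper under discussion) add an augmentation term so the transformed response remains unbiased if either the censoring model or the conditional survival model is correct.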
“…Recently, two papers introduced novel approaches to constructing ensemble methods for survival data. Steingrimsson et al (2018) proposed censoring unbiased regression survival trees and ensembles by extending the theory of censoring unbiased transformations applicable to loss functions for right-censored survival data. This new class of ensemble algorithms extends the random survival forest algorithm for use with an arbitrary loss function and allows the use of more general bootstrap procedures, such as the exchangeably weighted bootstrap (Weng, 1989).…”
Section: Other Ensemble Resampling Methods
confidence: 99%
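The exchangeably weighted bootstrap mentioned in this excerpt replaces the standard bootstrap's Multinomial(n, 1/n) resampling counts with any exchangeable nonnegative weights summing to n. A minimal sketch of one common special case, i.i.d. Exp(1) draws normalized to mean one (the Bayesian bootstrap), assuming the base learner accepts per-observation weights:

```python
import numpy as np

def exchangeable_weights(n, rng):
    """One draw of exchangeable bootstrap weights (Bayesian-bootstrap form).

    Returns n strictly positive weights summing to n; passing these to a
    base learner's sample-weight argument plays the role of one bootstrap
    resample when fitting each member of an ensemble.
    """
    w = rng.exponential(1.0, size=n)
    return n * w / w.sum()

rng = np.random.default_rng(0)
# each tree in an ensemble would be fit with its own weight draw, e.g.
# learner.fit(X, y, sample_weight=exchangeable_weights(len(y), rng))
w_b = exchangeable_weights(100, rng)
```

Unlike multinomial resampling, these weights are never exactly zero, so every observation influences every tree, which is one reason such schemes can be analytically convenient.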
“…For example, Davis and Anderson (1989) assumed that the survival time within any given node follows the exponential distribution; LeBlanc and Crowley (1992) adopted a proportional hazards model with an unspecified baseline hazard. More recently, the squared error loss commonly used in regression trees has been extended to handle censored data (Molinaro et al., 2004; Steingrimsson et al., 2016, 2018). Other possible splitting criteria include a weighted sum of the impurity of the censoring indicator and the squared error loss of the observed event time (Zhang, 1995), and Harrell's C-statistic (Schmid et al., 2016).…”
Section: Introduction
confidence: 99%