2018
DOI: 10.1002/mp.12967

Machine learning algorithms for outcome prediction in (chemo)radiotherapy: An empirical comparison of classifiers

Abstract: Random forest and elastic net logistic regression yield higher discriminative performance in (chemo)radiotherapy outcome and toxicity prediction than the other classifiers studied. Thus, one of these two classifiers should be the first choice for investigators when building classification models or benchmarking their own modeling results. Our results also show that an informed preselection of classifiers based on existing datasets can improve discrimination over random selection.
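As an illustration of the kind of comparison the abstract describes, the sketch below benchmarks a random forest against an elastic net logistic regression by cross-validated AUC using scikit-learn. The synthetic dataset, hyperparameters, and cross-validation settings are illustrative assumptions, not the datasets or tuning used in the paper.

```python
# Minimal sketch: compare random forest vs. elastic net logistic regression by AUC.
# Synthetic data and hyperparameters are illustrative, not those of the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a (chemo)radiotherapy outcome dataset (imbalanced binary endpoint).
X, y = make_classification(n_samples=500, n_features=20, n_informative=6,
                           weights=[0.7, 0.3], random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=500, random_state=0),
    # Elastic net logistic regression: mixed L1/L2 penalty via the saga solver.
    "elastic_net_logreg": make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=5000),
    ),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```

Standardizing features before the penalized regression matters because the elastic net penalty is scale-sensitive; the random forest needs no scaling.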

Cited by 234 publications (167 citation statements)
References 25 publications (16 reference statements)
“…A number of machine learning (ML) algorithms can provide robust means to identify a subset of features to combine into a multiparametric model (24). Although several ML algorithms, alone or in combination, have been used in radiomics analysis for feature selection and classification, there is no "one fits all" approach, as the performance of various ML workflows has been shown to depend on the application and/or type of data (25)(26)(27). Previous studies have tested cross-combinations of different ML approaches and have suggested distinct ML algorithms that show high performance for feature selection and classification (24,25,28).…”
Section: Introduction (mentioning)
confidence: 99%
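The excerpt above describes combining feature selection and classification into a multiparametric model. Below is a minimal sketch of one such workflow; the selector, classifier, and parameter grid are chosen purely for illustration (they are not taken from the cited studies), and selection is kept inside cross-validation to avoid information leakage.

```python
# Illustrative feature-selection + classification pipeline (not from any cited study).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic high-dimensional data standing in for a radiomics feature table.
X, y = make_classification(n_samples=300, n_features=100, n_informative=8,
                           random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(mutual_info_classif)),  # univariate feature selection
    ("clf", LogisticRegression(max_iter=5000)),    # downstream classifier
])

# Tune the number of retained features together with the classifier inside CV,
# so selection never sees the test folds.
grid = GridSearchCV(
    pipe,
    param_grid={"select__k": [5, 10, 20], "clf__C": [0.1, 1.0, 10.0]},
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    scoring="roc_auc",
)
grid.fit(X, y)
print("best params:", grid.best_params_, "CV AUC:", round(grid.best_score_, 3))
```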
“…When looking at the AUC metric alone, our results were comparable to values published in other SML-based EBRT prediction studies. 13,17,18,27,28 Direct comparison of models is limited, however, because most other studies used toxicity as an endpoint rather than dose tolerance. The exception is Caine et al., who also investigated protocol compliance but did not provide AUC or other performance metrics.…”
Section: Discussion (mentioning)
confidence: 99%
“…In a comparison of SML methods across a large number of datasets from different departments, the work by Deist and colleagues further highlights the limitations of comparing models from different datasets, as it can lead to variations in AUC performance and model superiority. 27 Beyond the standard model performance metrics, altering the probability threshold for predictive classification of outcomes was also investigated. This was in response to the knowledge that many clinical intervention decisions are not made on a 50% risk probability threshold.…”
Section: Discussion (mentioning)
confidence: 99%
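The threshold point in the excerpt above can be made concrete: a probabilistic classifier's default 0.5 cut-off can be replaced by any clinically motivated value, trading sensitivity against specificity. The model, data, and threshold values in this sketch are illustrative assumptions only.

```python
# Illustrative effect of moving the classification probability threshold.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced binary outcome data.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]  # predicted risk of the positive outcome

for threshold in (0.5, 0.3, 0.2):        # 0.5 is the conventional default cut-off
    pred = (proba >= threshold).astype(int)
    sens = recall_score(y_te, pred)               # sensitivity (true positive rate)
    spec = recall_score(y_te, pred, pos_label=0)  # specificity (true negative rate)
    print(f"threshold={threshold:.1f}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Lowering the threshold flags more patients as high risk, raising sensitivity at the cost of specificity, which is the trade-off a clinically chosen intervention threshold encodes.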
“…For example, individual-level treatment response prediction has been studied for schizophrenia [5] and depression [33]. An empirical comparison of classifiers for treatment-response prediction for chemoradiotherapy appears in [9]. Topics studied in the social sciences include the effect of a discrete treatment, years of education, on an individual's income [6], and allowing a response to depend on social interactions and treatments for other individuals [24].…”
Section: Related Work (mentioning)
confidence: 99%