2019
DOI: 10.1259/bjro.20190021

Balancing accuracy and interpretability of machine learning approaches for radiation treatment outcomes modeling

Abstract: Radiation outcomes prediction (ROP) plays an important role in personalized prescription and adaptive radiotherapy. A clinical decision may not only depend on an accurate prediction of radiation outcomes, but also needs to be made with an informed understanding of the relationships among patients’ characteristics, radiation response, and treatment plans. As more patient biophysical information becomes available, machine learning (ML) techniques will have great potential for improving ROP. Creating explainab…

Cited by 59 publications (45 citation statements)
References 61 publications
“…Accuracy in this clinical context refers to the ability of the ML and DL algorithms to perform the tasks they are assigned either as well as, or better than, a human could. Interpretability refers to the ability of the clinician to confidently understand and interpret the results of an algorithm, without necessarily having to understand the minute details of its mechanics [107]. This concept of interpretability is particularly important in a clinical setting because it can help act as a fail‐safe against instances in which algorithms may produce results that are flawed due to inherent bias in the training data or other unforeseen bugs.…”
Section: Are You the Human Still Relevant?
confidence: 99%
“…Algorithms which are both accurate and interpretable are able to gain a clinician’s trust as clinical tools because they can be expected to perform their function correctly most of the time and do not require the clinician to blindly accept their results. Existing ML and DL techniques are recognized to suffer from a trade-off between accuracy and interpretability, and therefore more work is necessary to develop ML/DL methods that can achieve a better balance between these two qualities, such as the use of gradient maps or proxy models with DNNs, or human-in-the-loop ML approaches, as discussed below [107].…”
Section: Are You the Human Still Relevant?
confidence: 99%
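The proxy-model idea mentioned in the statement above can be illustrated with a small sketch: a shallow decision tree is trained to mimic a black-box classifier's predictions so that its rules can be read directly. This is only a hedged illustration on synthetic data using scikit-learn, not the cited paper's implementation; the data, feature names, and model choices are assumptions.

```python
# Hedged sketch: a surrogate (proxy) decision tree that approximates a
# black-box model's predictions, one interpretability strategy named above.
# Synthetic data; all choices here are illustrative, not from the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for patient/dosimetric features and a binary outcome.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# "Black-box" model trained on the true labels.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Proxy model trained to mimic the black-box predictions, not the labels.
proxy = DecisionTreeClassifier(max_depth=3, random_state=0)
proxy.fit(X, black_box.predict(X))

# Fidelity: how often the proxy agrees with the black box on the same data.
fidelity = (proxy.predict(X) == black_box.predict(X)).mean()
print(f"Proxy fidelity to black-box model: {fidelity:.2f}")

# The proxy's rules can be printed and inspected directly.
print(export_text(proxy, feature_names=[f"x{i}" for i in range(8)]))
```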
“…The poor performance of decision trees is most likely due to overfitting during model training (Supplementary Figure S1). However, the fact that the RMSE of the interpretable models (e.g., tree-based models) was generally higher than the RMSE of black-box approaches (i.e., less interpretable models like SVM and ensemble algorithms) (Table 1; Figure 2) is illustrative of the trade-off between model interpretability and model performance [see (Meinshausen, 2010; Kuhn and Johnson, 2016; Doshi-Velez and Kim, 2017; Luo et al., 2019; Weller et al., 2020a) for more on these trade-offs]. Thus, our findings highlight the importance of weighing the need for interpretability vs. predictive accuracy before model fitting, particularly in future studies focused on developing implementable, field-ready models that growers can use for managing food safety hazards in agricultural water.…”
Section: Trade-offs Between Interpretability and Accuracy
confidence: 99%
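The interpretability/accuracy trade-off described in that statement can be reproduced in miniature by comparing held-out RMSE of a shallow regression tree against less interpretable learners. The sketch below uses synthetic data and scikit-learn; the models and any numbers it prints are illustrative assumptions, not results from the cited study.

```python
# Hedged sketch of the RMSE comparison described above: an interpretable,
# shallow regression tree vs. less interpretable models on a held-out split.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

# Synthetic regression problem standing in for the study's data.
X, y = make_regression(n_samples=400, n_features=10, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "decision tree (interpretable)": DecisionTreeRegressor(max_depth=4, random_state=0),
    "SVM regression (less interpretable)": SVR(C=10.0),
    "random forest (ensemble)": RandomForestRegressor(n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
    print(f"{name}: RMSE = {rmse:.1f}")
```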
“…Much of the reported research in those fields insists on the necessity of interpretable systems. In [8], it is stated that clinical decisions related to radiation treatment must not be based only on the accuracy of the prediction system, but also on an informed understanding of the relationships among patients’ characteristics, radiation response, and treatment plans. Additionally, there is the challenge of applying ANNs to predicting medical outcomes compared with the use of logistic regression for the same problem [9].…”
Section: Introduction
confidence: 99%
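The logistic-regression-versus-ANN comparison raised in reference [9] of that excerpt can be sketched as follows: both models are fit to the same binary outcome, but only the logistic model exposes per-feature odds ratios for direct clinical inspection. This is a hedged, synthetic-data illustration, not the referenced study's analysis.

```python
# Hedged sketch: logistic regression (coefficients readable as odds ratios)
# vs. a small neural network on the same synthetic binary outcome.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=600, n_features=6, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

logit = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
ann = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1),
).fit(X_tr, y_tr)

# Comparable discrimination can come with very different interpretability.
for name, model in [("logistic regression", logit), ("small ANN", ann)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")

# The logistic model exposes per-feature odds ratios; the ANN does not.
coefs = logit.named_steps["logisticregression"].coef_.ravel()
print("odds ratios:", np.round(np.exp(coefs), 2))
```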