2022
DOI: 10.1016/j.ijmedinf.2022.104896
An interpretable machine learning prognostic system for risk stratification in oropharyngeal cancer

Cited by 23 publications (17 citation statements)
References 62 publications
“…This interpretability can be particularly valuable in guiding treatment decisions, such as when considering alternative treatment modalities like radiotherapy (external beam or brachytherapy), where surgery carries a higher risk of side effects. On the other hand, RF and SVM models do not provide a straightforward way to visualize how the features affect the predicted outcome, which makes them less interpretable. Improving model interpretability was not the primary objective of our study; therefore, we did not explore methods such as partial dependence plots or surrogate models to enhance interpretability (35).…”
Section: Discussion
confidence: 99%
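The partial dependence plots mentioned above can be computed without any dedicated library. A minimal sketch follows; the function name `partial_dependence` and the toy model are illustrative, not taken from the cited study. The idea is to force one feature to a fixed value for every row of the dataset, average the model's predictions, and repeat over a grid of values:

```python
import numpy as np

def partial_dependence(f, X, feature, grid):
    """Average model output over the dataset while sweeping one feature."""
    curve = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v  # force the chosen feature to v for every row
        curve.append(np.mean([f(row) for row in Xv]))
    return np.array(curve)

# Toy model: for f(z) = 2*z0 + z1, the PD curve for feature 0 is a line of slope 2
f = lambda z: 2 * z[0] + z[1]
X = np.random.default_rng(0).normal(size=(100, 2))
pd_curve = partial_dependence(f, X, feature=0, grid=np.linspace(-1, 1, 5))
```

For a linear model the recovered curve is exactly linear; for black-box models such as RF or SVM, the same sweep visualizes the marginal effect of one feature, which is precisely the interpretability aid the passage describes.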
“…Considering the escalating global incidence of CRC, there is an urgent need for tools capable of quantifying the risk of disease progression, ultimately enhancing overall patient outcomes. A significant clinical challenge lies in accurately determining the risk of CRC-related LMs and conducting timely imaging screening (21). Numerous studies employing ML and AI technology (22) have contributed to the improved prognosis of CRC patients, with remarkable results.…”
Section: Discussion
confidence: 99%
“…It is based on Shapley values, a game-theoretic concept developed by the economist Lloyd Shapley to determine the importance of individual players by calculating their contributions to a cooperative game. This method has received much attention in AI interpretability research and has contributed significantly to advancing the clinical applications of models (21, 22). The Shapley value interpretation is an additive feature attribution method that interprets a model's predicted value as a linear function of a binary variable:

$$g(z') = \phi_0 + \sum_{j=1}^{M} \phi_j z'_j, \qquad z' \in \{0,1\}^M, \qquad \phi_j \in \mathbb{R}$$

where $g$ is the explanatory model, $z'$ is the coalition vector, $M$ is the maximum coalition size, and $\phi_j \in \mathbb{R}$ is the feature attribution of feature $j$.…”
Section: Methods
confidence: 99%
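The attribution $\phi_j$ in the formula above is, in the exact (non-approximated) case, a weighted average of the change in model output when feature $j$ joins each possible coalition $S$ of the other features. A minimal sketch of that definition follows; the function name `shapley_values` and the baseline-substitution convention for "absent" features are illustrative choices, not the cited study's implementation (practical SHAP tools approximate this sum, since it is exponential in $M$):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, baseline, x):
    """Exact Shapley attributions phi_j for model f at point x;
    features outside the coalition take their baseline value."""
    M = len(x)
    phi = [0.0] * M

    def value(subset):
        # Evaluate f with features in `subset` taken from x, the rest from baseline
        z = [x[i] if i in subset else baseline[i] for i in range(M)]
        return f(z)

    for j in range(M):
        others = [i for i in range(M) if i != j]
        for r in range(M):
            for S in combinations(others, r):
                S = set(S)
                # Shapley weight: |S|! (M - |S| - 1)! / M!
                w = factorial(len(S)) * factorial(M - len(S) - 1) / factorial(M)
                phi[j] += w * (value(S | {j}) - value(S))
    return phi

# Toy linear model: attributions equal each term's contribution over the baseline
f = lambda z: 2 * z[0] + 3 * z[1]
phi = shapley_values(f, baseline=[0, 0], x=[1, 1])  # -> [2.0, 3.0]
```

Note the efficiency property visible in the output: $\phi_0 + \sum_j \phi_j$ reproduces $g(z')$ at the full coalition, i.e. the attributions sum to the gap between the prediction and the baseline value.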
“…LIME, the acronym for local interpretable model-agnostic explanations (26), is a model-agnostic technique applied to an already trained model to investigate and analyze the relationship between the input parameters and the output represented by the model (27). It is a local interpretability technique that works by perturbing the input parameters while observing the effect of each perturbation on the output (28). The magnitude of these effects helps to gauge how reliable the model's prediction is and to identify which input variables drove the prediction for a given data sample.…”
Section: Methods
confidence: 99%
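The perturb-and-observe procedure described above can be sketched in a few lines: sample perturbations around the instance, query the black-box model, weight each sample by its proximity to the instance, and fit a weighted linear surrogate whose coefficients are the local explanation. This is a simplified illustration, assuming Gaussian perturbations and an RBF proximity kernel; the name `lime_explain` and the kernel width are illustrative, not the official LIME implementation:

```python
import numpy as np

def lime_explain(f, x, n_samples=1000, width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around x (simplified LIME)."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance by sampling around it
    Z = x + rng.normal(scale=0.5, size=(n_samples, len(x)))
    y = np.array([f(z) for z in Z])
    # 2. Weight samples by proximity to x (RBF kernel on Euclidean distance)
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / width ** 2)
    # 3. Weighted least squares for the local linear coefficients
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1]  # per-feature local slopes (intercept dropped)

# Black-box model: linear in z0, nonlinear in z1
f = lambda z: 4 * z[0] + np.sin(z[1])
slopes = lime_explain(f, np.array([0.0, 0.0]))
# slopes[0] should be close to 4; slopes[1] close to sin's local slope near 0
```

The returned slopes are the "significance of the tweaking" the passage refers to: large-magnitude coefficients mark the inputs that most influenced the prediction in the neighborhood of the explained sample.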