2023
DOI: 10.1016/j.cmpb.2023.107482
Development of prediction models for one-year brain tumour survival using machine learning: a comparison of accuracy and interpretability

Cited by 7 publications (3 citation statements)
References 40 publications
“…Integration strategies can align ML models with existing medical knowledge and clinical practices, a critical aspect in fostering trust among physicians who rely on the efficacy of these long-standing guidelines. This alignment is also crucial in ensuring continuity of care and promoting the adoption of such models in clinical settings [47]. In the presented case study, integrations in learning and output evaluation phases attained near-complete or complete adherence to the guidelines while improving performance, thereby enhancing the model’s potential for clinical adoption.…”
Section: Discussion
confidence: 99%
See 1 more Smart Citation
“…Integration strategies can align ML models with existing medical knowledge and clinical practices, a critical aspect in fostering trust among physicians who rely on the efficacy of these long-standing guidelines. This alignment is also crucial in ensuring continuity of care and promoting the adoption of such models in clinical settings [ 47 ]. In the presented case study, integrations in learning and output evaluation phases attained near-complete or complete adherence to the guidelines while improving performance, thereby enhancing the model’s potential for clinical adoption.…”
Section: Discussionmentioning
confidence: 99%
“…EHRs also provide structured data, including patient demographics, laboratory tests, and medications, which can be challenging to analyse due to their high dimensionality, temporality, sparsity, irregularity and bias. To address these challenges, recent integrations have leveraged expert-defined thresholds to discretise continuous variables into meaningful intervals [47], as well as previous literature [48] and existing expert models [49] to inform feature selection. Other applications have generated concise sets of meaningful summary features from expert-defined rules [50] and enriched EHR representation through the integration of knowledge graphs [51] and hierarchical code classifications [52].…”
Section: Previous Work
confidence: 99%
“…Improving the interpretability of the model is another important direction for the development of ML in the differential diagnosis of LTBI [291, 292]. Traditionally, ML algorithms often use black-box models to balance interpretability and predictiveness [293].…”
Section: Future Directions of ML for the Differential Diagnosis of LTBI
confidence: 99%