2022
DOI: 10.48550/arxiv.2205.12729
Preprint

Deep interpretable ensembles

Abstract: Ensembles improve prediction performance and allow uncertainty quantification by aggregating predictions from multiple models. In deep ensembling, the individual models are usually black box neural networks, or recently, partially interpretable semi-structured deep transformation models. However, interpretability of the ensemble members is generally lost upon aggregation. This is a crucial drawback of deep ensembles in high-stake decision fields, in which interpretable models are desired. We propose a novel tr…
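The aggregation the abstract refers to can be illustrated with a minimal sketch. This is not the paper's semi-structured transformation model; the "members" below are hypothetical stand-in predictors, and the mean/spread computation simply shows how averaging yields one prediction while member disagreement serves as an uncertainty signal.

```python
import numpy as np

def make_member(seed):
    """Hypothetical ensemble member: a noisy linear predictor whose
    weight depends on its random initialization seed."""
    w = np.random.default_rng(seed).normal(loc=1.0, scale=0.1)
    return lambda x: w * x

members = [make_member(seed) for seed in range(5)]
x = np.linspace(0.0, 1.0, 4)

# Deep-ensemble aggregation: stack member predictions, average them
# into one ensemble prediction, and use the spread across members as
# a simple estimate of model (epistemic) uncertainty.
preds = np.stack([m(x) for m in members])  # shape (5, 4)
mean = preds.mean(axis=0)                  # aggregated prediction
std = preds.std(axis=0)                    # member disagreement
```

Note how, after stacking and averaging, any per-member structure (e.g. an interpretable weight) is no longer visible in `mean` — the interpretability loss the abstract describes.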

Citations: Cited by 3 publications (2 citation statements)
References: 17 publications
“…In each fold, we trained 5 randomly initialized versions of the model to consider uncertainty in model parameters and potentially improve prediction performance. 21 The predictions of the 5 folds were then averaged to 1 final prediction. We imputed missing values of clinical variables (Table 1) using missForest.…”
Section: Functional Outcome Prediction Models (mentioning, confidence: 99%)
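The protocol quoted above — training 5 randomly initialized versions of a model and averaging their predictions into one final prediction — can be sketched as follows. The data, the sigmoid predictor, and the seed-dependent "training" are all hypothetical stand-ins; the cited study's clinical variables and network architecture are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical features standing in for the study's clinical variables.
X_new = rng.normal(size=(10, 3))
base_w = np.array([0.5, -0.2, 0.1])

def fit_model(seed):
    """Stand-in for training one randomly initialized network: the
    learned weights are perturbed by the initialization seed, mimicking
    parameter uncertainty across random restarts."""
    noise = np.random.default_rng(seed).normal(scale=0.05, size=3)
    return base_w + noise

def predict(w, X):
    # Sigmoid link, so each prediction lies in (0, 1).
    return 1.0 / (1.0 + np.exp(-(X @ w)))

# Train 5 randomly initialized versions and average their predictions
# into one final prediction per sample.
weights = [fit_model(seed) for seed in range(5)]
per_model = np.stack([predict(w, X_new) for w in weights])  # (5, 10)
final_prediction = per_model.mean(axis=0)
```

Averaging over random initializations both smooths individual-model noise and, via the spread of `per_model`, gives a handle on uncertainty in the model parameters.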
“…In the context of IDS, ensemble learning plays a crucial role in improving detection accuracy by combining the outputs of multiple classifiers [ 50 ]. However, the interpretability of individual ensemble members may be lost during the aggregation process, highlighting a trade-off between accuracy and explainability [ 51 ]. Moreover, the selection of appropriate ensemble learning models is essential to address the specific requirements of IDS, considering factors such as feature selection and model performance [ 52 ].…”
Section: Introduction (mentioning, confidence: 99%)
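The accuracy-versus-explainability trade-off described in the IDS statement can be seen in a minimal majority-vote sketch. The three base classifiers and their outputs are hypothetical; no real intrusion detectors are reproduced here.

```python
import numpy as np

# Hypothetical outputs of three IDS base classifiers on six network
# flows (1 = attack, 0 = benign).
votes = np.array([
    [1, 0, 1, 1, 0, 0],  # classifier A
    [1, 0, 0, 1, 0, 1],  # classifier B
    [1, 1, 1, 1, 0, 0],  # classifier C
])

# Majority vote: a flow is flagged when more than half the members
# agree. Detection accuracy can improve over any single member, but
# the per-classifier reasoning behind each vote is no longer visible
# in the combined decision — the interpretability loss noted above.
combined = (votes.mean(axis=0) > 0.5).astype(int)
# combined -> [1, 0, 1, 1, 0, 0]
```

A single decision tree would explain each flag; the vote of three heterogeneous members, in general, does not.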