2023
DOI: 10.1016/j.is.2022.102162

A quantitative approach for the comparison of additive local explanation methods

Cited by 12 publications (9 citation statements)
References 29 publications
“…Comparing R² and MAE on the test dataset, XGBoost and MLP performed the best, with similar performances and the lowest standard deviations during cross-validation for XGBoost (Figure 2a,b; Figure S6). Given the high number of variables (high dimensionality) and the number of subjects in the database, XGBoost was selected for its ability to efficiently compute explanations (Doumard et al., 2023). The differential error of the model by age, predicting young individuals as older or the opposite, was greatly minimized using the custom objective function during XGBoost training (Figure 2b), with no significant impact on the global performance (0.72 and 8.1 on the test dataset for R² and MAE, respectively, Figure 2b).…”
Section: Results
confidence: 99%
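The custom objective mentioned above is not specified in the quoted statement. As a minimal numpy sketch of the mechanism (not the actual loss used by the cited study), an XGBoost custom objective returns the per-sample gradient and Hessian of the loss with respect to the predictions; a hypothetical per-sample weight (e.g. derived from age bins) could then rebalance the error:

```python
import numpy as np

def weighted_squared_error(preds, labels, weights):
    """Hypothetical weighted squared error w*(pred - y)^2 in the form
    XGBoost expects from a custom objective: per-sample gradient and
    Hessian of the loss with respect to the predictions."""
    grad = 2.0 * weights * (preds - labels)   # d/dpred of w*(pred - y)^2
    hess = 2.0 * weights * np.ones_like(preds)
    return grad, hess

# Toy check: the gradient vanishes where the prediction equals the label,
# and is scaled up for samples given a larger (hypothetical) weight.
preds = np.array([30.0, 50.0])
labels = np.array([30.0, 40.0])
weights = np.array([1.0, 2.0])
grad, hess = weighted_squared_error(preds, labels, weights)
```

A callable of this shape can be passed to XGBoost's training API as the objective; the weights here are purely illustrative.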
“…To define the contribution of each variable to individual PPA prediction, the SHapley Additive exPlanations (SHAP) Tree framework was applied to the XGBoost model trained with the custom loss (Doumard et al., 2023). The SHAP value integrates both the effect per se of a given biological variable and the effects of this variable in interaction with other biological parameters.…”
Section: Results
confidence: 99%
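The defining property of the additive explanations discussed here is local accuracy: the per-feature values sum to the prediction minus the expected prediction. A minimal sketch for the special case of a linear model with independent features, where the SHAP value of feature j has the closed form w_j·(x_j − E[x_j]) (this is an illustration of the additivity property, not the TreeExplainer algorithm used in the cited work):

```python
import numpy as np

def linear_shap(w, x, x_mean):
    """Exact Shapley values for a linear model f(x) = b + w.x with
    independent features: phi_j = w_j * (x_j - E[x_j])."""
    return w * (x - x_mean)

w = np.array([2.0, -1.0, 0.5])   # toy model coefficients
b = 10.0                          # toy intercept
x = np.array([1.0, 3.0, 4.0])    # instance to explain
x_mean = np.array([0.0, 2.0, 2.0])  # feature means (baseline)

phi = linear_shap(w, x, x_mean)
f_x = b + w @ x
f_mean = b + w @ x_mean
# Local accuracy: phi.sum() equals f(x) minus the mean prediction.
```

For tree ensembles such as XGBoost, TreeExplainer computes the same quantities exactly by exploiting the tree structure rather than this closed form.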
“…Each of them sheds light on a different aspect of the AI model’s computation, and it has repeatedly been shown that they do not agree with one another, leading to the so-called ‘disagreement’ problem (Krishna et al., 2022). Currently, quality metrics for xAI methods (Doumard et al., 2023; Schwalbe and Finzel, 2023) and benchmarks for their evaluation are being defined (Agarwal et al., 2023) to steer xAI research in directions that support trustworthy, reliable, actionable and causal explanations, even if these do not always align with human preconceived notions and expectations (Holzinger et al., 2019; Magister et al., 2021; Finzel et al., 2022; Saranti et al., 2022; Cabitza et al., 2023; Holzinger et al., 2023c).…”
Section: Accelerating Plant Breeding Processes With Explainable AI
confidence: 99%
“…As an additive method, it assigns an influence value to each feature of each instance, which represents its contribution to the prediction. Moreover, an advantage of LIME over other additive methods such as KernelSHAP and coalition-based methods is its lower computational complexity as the number of features increases [74], which is critical for the feasibility of this study.…”
Section: A. Experimental Workflow
confidence: 99%
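The LIME mechanism referred to above can be sketched in a few lines: perturb the instance, query the black-box model, weight the perturbed samples by proximity, and fit a weighted linear surrogate whose coefficients serve as the additive influence values. This is a minimal numpy illustration under simplified assumptions (Gaussian perturbations, an RBF proximity kernel, no feature discretization), not the lime library itself:

```python
import numpy as np

def lime_explain(model, x, n_samples=500, scale=0.5, seed=0):
    """Minimal LIME-style local surrogate: fit a proximity-weighted
    linear model around instance x and return its coefficients as
    per-feature influence values."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.size))  # perturbations
    y = model(Z)                                              # black-box queries
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale**2))  # proximity kernel
    A = np.hstack([np.ones((n_samples, 1)), Z])               # add intercept column
    W = np.sqrt(w)[:, None]                                   # weighted least squares
    coef, *_ = np.linalg.lstsq(A * W, y * W.ravel(), rcond=None)
    return coef[1:]                                           # drop the intercept

black_box = lambda Z: 3.0 * Z[:, 0] - 2.0 * Z[:, 1]  # toy "model" to explain
phi = lime_explain(black_box, np.array([1.0, 1.0]))
```

For this noiseless linear toy model the surrogate recovers the model's own coefficients; for a nonlinear model the coefficients instead describe the local behavior around x, which is the cost of LIME's favorable scaling in the number of features.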