2022
DOI: 10.1108/bij-02-2022-0112
When to choose ranked area integrals versus integrated gradient for explainable artificial intelligence – a comparison of algorithms

Abstract:
Purpose: Explainable artificial intelligence (XAI) is important in several industrial applications. The study aims to provide a comparison of two important methods used for explainable AI algorithms.
Design/methodology/approach: In this study, multiple criteria have been used to compare the explainable Ranked Area Integrals (xRAI) and integrated gradient (IG) methods for the explainability of AI algorithms, based on a multimethod phase-wise analysis research design.
Findings: The theoretical part includes the comp…

Cited by 6 publications (2 citation statements)
References 77 publications
“…The term ‘attributions’ is common in model interpretability, and multiple attribution algorithms are associated with it. Algorithms can rely on different principles to quantify attributions, such as gradients 23,46,47 or perturbations [48][49][50]. For our study we selected the integrated gradient attribution method 23, which uses the input's gradients after back-propagation and does not require modification of the original network.…”
Section: Instance Level Explanations (mentioning, confidence: 99%)
“…The term ‘attributions’ is common in model interpretability, and multiple attribution algorithms are associated with it. Algorithms can rely on different principles to quantify attributions, such as gradients 33,35,36 or perturbations [37][38][39]. For our study we selected the integrated gradient attribution method 33, which uses the input's gradients after back-propagation and does not require modification of the original network.…”
Section: Instance Level Explanations (mentioning, confidence: 99%)
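
The citation statements describe integrated gradients only at a high level (input gradients after back-propagation, no modification of the network). The sketch below is a minimal illustration of that technique as introduced by Sundararajan et al. (reference 23/33 above), not code from the cited papers; the function name, the all-zeros baseline, and the 50-step Riemann approximation of the path integral are assumptions made for the example, and a PyTorch classifier with batched input is assumed.

```python
import torch

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Riemann-sum approximation of integrated gradients.

    model    : a PyTorch module mapping a batch of inputs to class scores
    x        : a single input tensor (no batch dimension)
    target   : index of the output class to attribute
    baseline : reference input; an all-zeros tensor by default (an assumption)
    """
    if baseline is None:
        baseline = torch.zeros_like(x)
    # Interpolation coefficients along the straight-line path baseline -> x.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    # Stack all interpolation points as one batch; shape (steps, *x.shape).
    path = baseline.unsqueeze(0) + alphas * (x - baseline).unsqueeze(0)
    path.requires_grad_(True)
    # One forward pass over all points; the original network is unchanged.
    scores = model(path)[:, target]
    # Gradients of the target score w.r.t. every interpolation point.
    grads = torch.autograd.grad(scores.sum(), path)[0]
    # Average gradients along the path, scaled by the input difference.
    return (x - baseline) * grads.mean(dim=0)
```

Increasing `steps` tightens the approximation; a quick sanity check is the completeness axiom, under which the attributions should sum (approximately) to the difference between the model's target score at the input and at the baseline.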