2022
DOI: 10.3390/make4040047
Actionable Explainable AI (AxAI): A Practical Example with Aggregation Functions for Adaptive Classification and Textual Explanations for Interpretable Machine Learning

Abstract: In many domains of our daily life (e.g., agriculture, forestry, health, etc.), both laymen and experts need to classify entities into two binary classes (yes/no, good/bad, sufficient/insufficient, benign/malign, etc.). For many entities, this decision is difficult and we need another class called “maybe”, which contains a corresponding quantifiable tendency toward one of these two opposites. Human domain experts are often able to mark any entity, place it in a different class and adjust the position of the slo…
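The three-class scheme described in the abstract (two binary poles plus a "maybe" class carrying a quantifiable tendency toward one of them) can be illustrated with a minimal sketch. This is not the paper's actual aggregation-function method; the thresholds `lo` and `hi` and the linear tendency mapping are hypothetical choices for illustration only.

```python
def classify(score: float, lo: float = 0.4, hi: float = 0.6):
    """Map a score in [0, 1] to ("no" | "maybe" | "yes", tendency).

    tendency lies in [-1, 1]; for the "maybe" class its sign points
    toward the nearer of the two opposite poles. lo/hi are assumed,
    adjustable band boundaries (hypothetical parameters).
    """
    if score < lo:
        return "no", -1.0
    if score > hi:
        return "yes", 1.0
    # Inside the "maybe" band: linear tendency from -1 (at lo) to +1 (at hi).
    tendency = 2.0 * (score - lo) / (hi - lo) - 1.0
    return "maybe", tendency

print(classify(0.2))   # clear "no"
print(classify(0.5))   # "maybe", tendency 0.0 (exactly between the poles)
print(classify(0.55))  # "maybe", leaning toward "yes"
```

Adjusting `lo` and `hi` mimics the adaptive aspect the abstract mentions: a domain expert can widen or narrow the "maybe" band, and entities near its edges receive a tendency that quantifies how close they are to a definite class.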

Cited by 21 publications (9 citation statements)
References 55 publications (61 reference statements)
“…Future research should also compare the effectiveness of the SHAP technique employed here with other explainable AI tools, such as LIME 61 , Deep-Lift 62 or LRP 63 , as well as interpretable Transformer techniques 85 , 86 . Future work could also explore the possibility of using explainable AI to understand decision making in contexts where indecision often occurs 87 , as well as whether an explainable-AI analysis of the input–output mappings that underlie misclassification or incorrect decision predictions could be used to understand ineffective decision-making.…”
Section: Discussion
confidence: 99%
“…Furthermore, we plan to test more Explainable AI methods to provide deeper insight into our model performance [ 61 , 62 , 63 ], e.g., Layer-wise Relevance Propagation that captures both negative and positive relevance. Finally, actionable and explainable AI extensions based on our GCECDL would be an exciting research line [ 64 ].…”
Section: Discussion
confidence: 99%
“…Each of them sheds light on a different aspect of the AI model’s computation and many times it has been shown that there is no mutual consent between them, leading to the so-called ‘disagreement’ problem ( Krishna et al., 2022 ). Currently, quality metrics for xAI methods ( Doumard et al., 2023 ; Schwalbe and Finzel, 2023 ) and benchmarks for its evaluation are being defined ( Agarwal et al., 2023 ) to motivate xAI research in directions that support trustworthy, reliable, actionable and causal explanations even if they don’t always align with human pre-conceived notions and expectations ( Holzinger et al., 2019 ; Magister et al., 2021 ; Finzel et al., 2022 ; Saranti et al., 2022 ; Cabitza et al., 2023 ; Holzinger et al., 2023c ).…”
Section: Accelerating Plant Breeding Processes With Explainable AI
confidence: 99%