2022
DOI: 10.1007/s13218-022-00781-7

Generating Explanations for Conceptual Validation of Graph Neural Networks: An Investigation of Symbolic Predicates Learned on Relevance-Ranked Sub-Graphs

Abstract: Graph Neural Networks (GNN) show good performance in relational data classification. However, their contribution to concept learning and the validation of their output from an application domain’s and user’s perspective have not been thoroughly studied. We argue that combining symbolic learning methods, such as Inductive Logic Programming (ILP), with statistical machine learning methods, especially GNNs, is an essential forward-looking step to perform powerful and validatable relational concept learning. In th…

Cited by 12 publications (4 citation statements)
References 41 publications (23 reference statements)
“…Knowledge-driven solutions based on logical reasoning over symbolic rules help overcome both of these limitations. By enforcing compliance of explanations with prior knowledge about the tasks, rule-based methods [108], [145] ensure that only sound associations between predictions and explanations are allowed. Also, logic rules are unambiguous and easy for humans to understand.…”
Section: Knowledge-informed Explainability Methods (mentioning, confidence: 99%)
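The idea of enforcing compliance of explanations with prior knowledge can be illustrated with a small sketch. This is an illustrative example only, not taken from the cited rule-based methods [108], [145]: the class names, concept names, and allowed-concept table are invented assumptions.

```python
# Minimal sketch: accept an explanation only if every concept it uses is
# licensed by prior knowledge for the predicted class. All names below are
# hypothetical, chosen only to illustrate the filtering idea.

# Prior knowledge: concepts that a sound explanation may mention per class.
ALLOWED_CONCEPTS = {
    "mutagenic": {"nitro_group", "aromatic_ring", "halogen"},
    "non_mutagenic": {"aliphatic_chain", "hydroxyl_group"},
}

def is_compliant(prediction: str, explanation: set[str]) -> bool:
    """Return True iff the explanation only uses concepts allowed
    by the prior knowledge for the predicted class."""
    return explanation <= ALLOWED_CONCEPTS.get(prediction, set())

# A sound association is kept, a spurious one is rejected.
print(is_compliant("mutagenic", {"nitro_group", "aromatic_ring"}))  # True
print(is_compliant("mutagenic", {"hydroxyl_group"}))                # False
```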
“…Each of them sheds light on a different aspect of the AI model’s computation, and it has often been shown that there is no mutual agreement between them, leading to the so-called ‘disagreement’ problem (Krishna et al., 2022). Currently, quality metrics for xAI methods (Doumard et al., 2023; Schwalbe and Finzel, 2023) and benchmarks for their evaluation are being defined (Agarwal et al., 2023) to motivate xAI research in directions that support trustworthy, reliable, actionable and causal explanations, even if they don’t always align with human preconceived notions and expectations (Holzinger et al., 2019; Magister et al., 2021; Finzel et al., 2022; Saranti et al., 2022; Cabitza et al., 2023; Holzinger et al., 2023c).…”
Section: Accelerating Plant Breeding Processes With Explainable AI (mentioning, confidence: 99%)
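To make the ‘disagreement’ problem concrete, here is a toy sketch. The attribution values are fabricated for illustration and do not come from any of the cited studies; it only shows how two explanation methods can rank the same features very differently.

```python
# Toy illustration of the 'disagreement' problem between xAI methods:
# two (made-up) attribution vectors for the same prediction, compared
# with a rank correlation.
import numpy as np
from scipy.stats import spearmanr

attributions_method_a = np.array([0.9, 0.1, 0.4, 0.05])  # e.g., a gradient-based method
attributions_method_b = np.array([0.2, 0.8, 0.1, 0.7])   # e.g., a perturbation-based method

rho, _ = spearmanr(attributions_method_a, attributions_method_b)
print(f"rank agreement (Spearman rho): {rho:.2f}")  # low or negative -> disagreement
```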
“…CBMs like Concept-Bottleneck Models [18], Concept Whitening [27], and GlanceNets [19], among others [47,92,93], define a training loss penalty, for instance a cross-entropy loss, encouraging the extracted concepts to predict the annotations. Recently, these methods have also been extended to graph neural networks [94,95]. This solution seems straightforward: there is no more direct way than concept supervision to guide the model toward acquiring representations with the intended semantics.…”
Section: Supervised Strategies (mentioning, confidence: 99%)
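The concept-supervision penalty described in this statement can be sketched in a few lines of PyTorch. This is a minimal, assumption-laden illustration, not the implementation of any of the cited models: the layer sizes, the sigmoid bottleneck, and the unit weighting of the concept loss are choices made here for brevity.

```python
# Sketch of a concept-bottleneck-style loss: predict annotated concepts,
# then predict the task label from those concepts, and penalize both.
import torch
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    def __init__(self, in_dim=32, n_concepts=8, n_classes=2):
        super().__init__()
        self.concept_head = nn.Linear(in_dim, n_concepts)  # input -> concept logits
        self.task_head = nn.Linear(n_concepts, n_classes)  # concepts -> label logits

    def forward(self, x):
        c_logits = self.concept_head(x)
        y_logits = self.task_head(torch.sigmoid(c_logits))  # labels depend only on concepts
        return c_logits, y_logits

model = ConceptBottleneck()
x = torch.randn(16, 32)                        # e.g., pooled node embeddings from a GNN (assumption)
c_true = torch.randint(0, 2, (16, 8)).float()  # binary concept annotations
y_true = torch.randint(0, 2, (16,))            # task labels

c_logits, y_logits = model(x)
concept_loss = nn.functional.binary_cross_entropy_with_logits(c_logits, c_true)
task_loss = nn.functional.cross_entropy(y_logits, y_true)
loss = task_loss + 1.0 * concept_loss  # concept penalty steers representations toward the annotated semantics
```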