2020
DOI: 10.3233/sw-190374
On the role of knowledge graphs in explainable AI

Cited by 96 publications (46 citation statements)
References 40 publications
“…Similarly, plenty of implementation problems arise out of looking closely at the algorithm's different features in order to achieve real interpretability and explanation. Knowledge Graphs have been designed to capture knowledge from heterogeneous domains, making them a great candidate to achieve explanation in deep learning systems [124].…”
Section: And Explainability: A Machine Learning Zoo Mini-tour; Explainable AI: A Review of Machine Learning Interpretability Methods
confidence: 99%
“…In particular, for opaque machine learning processes such as neural networks and genetic algorithms, KGs can help document the provenance of the workflow and improve the interpretability of results. A key feature of KGs is their capability of defining groups or clusters and their associated attributes, which can be leveraged to add a semantic layer to many machine learning algorithms (Lecue, 2020). For example, by explicating typical attributes of instances in a subgroup, KGs can explain the grouping process in a machine learning process and demonstrate the meaning of results (Ristoski and Paulheim, 2016).…”
Section: Intelligent Geosciences Underpinned by Knowledge Graphs
confidence: 99%
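The mechanism this excerpt describes, explaining a cluster by the typical KG attributes of its members, can be sketched in a few lines of Python. The mini knowledge graph, the entity and attribute names, and the `explain_cluster` helper below are all hypothetical illustrations, not taken from Lecue (2020) or Ristoski and Paulheim (2016):

```python
from collections import Counter

# Hypothetical mini knowledge graph: entity -> set of KG attribute labels.
# Entities and labels are invented for illustration only.
kg = {
    "aspirin":    {"Drug", "Anti-inflammatory"},
    "ibuprofen":  {"Drug", "Anti-inflammatory"},
    "penicillin": {"Drug", "Antibiotic"},
    "malaria":    {"Disease", "Infectious"},
    "influenza":  {"Disease", "Infectious"},
}

def explain_cluster(members, kg, min_support=0.5):
    """Return the KG attributes shared by at least min_support of the
    cluster's members -- the 'typical attributes' that make an otherwise
    opaque grouping interpretable to a human."""
    counts = Counter(attr for m in members for attr in kg.get(m, ()))
    n = len(members)
    return sorted(attr for attr, c in counts.items() if c / n >= min_support)

# Clusters as they might come out of an opaque clustering algorithm.
clusters = [["aspirin", "ibuprofen", "penicillin"], ["malaria", "influenza"]]
for cluster in clusters:
    print(cluster, "->", explain_cluster(cluster, kg))
```

The semantic layer here is the `kg` mapping: the clustering itself stays a black box, while the KG supplies human-readable attributes that summarise what each group has in common.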
“…Graph neural networks (GNN) are deep learning-based models working on the graph domain [10, 25, 26]. Their properties have recently drawn the attention of the artificial intelligence research community, given their high interpretability and their status as the only non-Euclidean models available in machine learning [27, 13]. The combination of graph theory and neural network elements has made GNNs one of the most promising tools to analyse complex systems in the graph domain.…”
Section: Design of a Biomedical Digital Twin
confidence: 99%
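The core GNN idea this excerpt refers to, message passing over a graph's neighbourhood structure, can be sketched without any deep-learning library. The toy graph, its features, and the `message_pass` function below are invented for illustration and deliberately omit learned weights and nonlinearities:

```python
# Minimal one-layer graph message-passing sketch (mean aggregation).
# Graph given as an adjacency list; features as per-node vectors.
graph = {0: [1, 2], 1: [0], 2: [0]}
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [0.0, 1.0]}

def message_pass(graph, feats):
    """Update each node's feature vector to the element-wise mean of its
    own vector and its neighbours' vectors (one round of aggregation)."""
    out = {}
    for node, nbrs in graph.items():
        group = [feats[node]] + [feats[n] for n in nbrs]
        out[node] = [sum(col) / len(group) for col in zip(*group)]
    return out

print(message_pass(graph, feats))
```

A trained GNN would interleave such aggregation steps with learned linear transformations and nonlinearities; the point of the sketch is only that every update is defined by the graph's local structure, which is what makes the resulting model comparatively interpretable.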