2020
DOI: 10.48550/arxiv.2010.08660
Preprint

Semantics of the Black-Box: Can knowledge graphs help make deep learning systems more interpretable and explainable?

Abstract: The recent series of innovations in deep learning (DL) has shown enormous potential to impact individuals and society, both positively and negatively. DL models, utilizing massive computing power and enormous datasets, have significantly outperformed prior benchmarks on increasingly difficult, well-defined research tasks across technology domains such as computer vision, natural language processing, signal processing, and human-computer interaction. However, the Black-Box nature of DL models and…

Cited by 2 publications (3 citation statements)
References 15 publications (21 reference statements)
“…Wang et al [45] augment the BERT model with a structured prediction layer to predict multiple relations in one pass. In all the approaches discussed so far, knowledge has not been a component of the architecture [46].…”
Section: Related Work
confidence: 99%
“…Doctor XAI (Panigutti et al (2020)) develops a model-agnostic XAI technique for ontology-linked data classification by training a surrogate model and extracting rules from it. Gaur et al (2020) feed deep learning models with Knowledge Graph information to enhance their explainability. Samek and Müller (2019) envision that Knowledge Graphs can be used to compact large tree models by combining nodes into unique probabilistic concepts.…”
Section: XAI: Black-Box Models and Semantic Technologies
confidence: 99%
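The surrogate-model idea in the statement above (train an interpretable proxy on a black box's own predictions, then read rules off the proxy) can be sketched generically. The following is a minimal illustration with scikit-learn on a standard dataset, not Doctor XAI's ontology-linked procedure; the dataset, model choices, and depth limit are assumptions made for the sketch:

```python
# Generic surrogate-model explanation sketch (NOT the Doctor XAI algorithm):
# 1) train an opaque model, 2) fit a shallow decision tree to mimic the
# opaque model's predictions, 3) extract human-readable rules from the tree.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# The opaque "black box" whose behavior we want to explain.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Surrogate: a shallow tree trained on the black box's outputs, not on y.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: fraction of inputs where the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()

# Human-readable if/then rules extracted from the surrogate tree.
rules = export_text(surrogate, feature_names=load_iris().feature_names)
print(f"fidelity = {fidelity:.2f}")
print(rules)
```

The rules explain the surrogate, so the fidelity score matters: only when the surrogate closely tracks the black box do its rules say anything trustworthy about the black box's behavior.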
“…Such explanations can be provided at a global level (the forecasting model in general) or at a local level (for every prediction instance). Some authors have also envisioned that semantic technologies could be used to determine the semantic closeness of concepts encoded in data features, or that knowledge graph embeddings could be integrated into a forecasting model to produce explanations along with the forecasts (Gaur et al (2020); Panigutti et al (2020)).…”
Section: Introduction
confidence: 99%