2020
DOI: 10.3390/info11020122

On the Integration of Knowledge Graphs into Deep Learning Models for a More Comprehensible AI—Three Challenges for Future Research

Abstract: Deep learning models have contributed to unprecedented results in the prediction and classification tasks of Artificial Intelligence (AI) systems. However, alongside this notable progress, they do not provide human-understandable insights into how a specific result was achieved. In contexts where the impact of AI on human life is significant (e.g., recruitment tools, medical diagnoses), explainability is not only a desirable property but is (or, in some cases, soon will be) a legal requirement. Most of …

Cited by 62 publications (33 citation statements). References 28 publications.

Citation statements, ordered by relevance:
“…Applying this technique would then lead to higher costs for the overall FMEA deployment through training and dedicating a specialised team. As indicated in Futia and Vetrò [45], there are also two main challenges for adopting KBS techniques: knowledge matching and explanations from different sources. These challenges would be exacerbated in the practice of FMEA, where there are already difficulties in identifying inputs, including failure modes and other features from different sources and backgrounds, as well as a gap in knowledge between Functional, Design, and Process FMEAs.…”
Section: Discussion
Confidence: 99%
“…[216], [203], [53], [204], [54] Graph-based: explanations are generated through knowledge graphs and scene graphs to make the models more interpretable. [197], [182], [96], [196] Interactive approach: virtual agents' assistance, algorithms for explaining the system's internal state, and a feedback loop to correct wrong predictions are widely used.…”
Section: E. Attribute-Based Methods
Confidence: 99%
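To make the "graph-based" category in the statement above concrete, here is a minimal sketch (our illustration, not code from the cited papers) of one common form of knowledge-graph explanation: justifying a prediction by the chains of relations that connect the input evidence to the predicted concept. The toy graph, entity names, and the explain helper are all hypothetical.

```python
# Illustrative sketch: a toy "graph-based explanation", where a model's
# prediction is justified by relation paths through a knowledge graph.
import networkx as nx

# Hypothetical knowledge graph: nodes are entities/concepts, edges carry
# a typed relation as an attribute.
kg = nx.DiGraph()
kg.add_edge("chest_xray_opacity", "pneumonia", relation="indicates")
kg.add_edge("pneumonia", "lung_infection", relation="is_a")
kg.add_edge("fever", "lung_infection", relation="symptom_of")

def explain(prediction: str, evidence: list[str]) -> list[str]:
    """Return human-readable relation chains linking each piece of
    input evidence to the predicted concept."""
    chains = []
    for feature in evidence:
        try:
            path = nx.shortest_path(kg, source=feature, target=prediction)
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            continue  # no supporting chain in the graph for this feature
        steps = [
            f"{u} --{kg.edges[u, v]['relation']}--> {v}"
            for u, v in zip(path, path[1:])
        ]
        chains.append("; ".join(steps))
    return chains

# explain("lung_infection", ["chest_xray_opacity", "fever"]) yields one
# relation chain per evidence feature the graph can connect to the label.
```

The point of such methods, as the survey notes, is that the explanation is read directly off the graph structure rather than inferred post hoc from model weights.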
“…A "Multimodal Knowledge-aware Hierarchical Attention Network "in which a knowledge graph with multiple modalities and different features is built for the medical field. In [204], a comprehensive view on the neuro symbolic AI perceptive is provided and integration of VOLUME XX, 2017 knowledge graphs in deep learning models for model interpretability is proposed.…”
Section: ) Explaination Using Scene Graphsmentioning
confidence: 99%
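The quoted passage names a knowledge-aware attention architecture; the following sketch shows, under our own simplifying assumptions, a single attention step over knowledge-graph entity embeddings. The knowledge_attention function, the dimensions, and the random inputs are illustrative, not the cited network.

```python
# Illustrative sketch (not the cited architecture): one knowledge-aware
# attention step, where a model's hidden state attends over embeddings
# of knowledge-graph entities linked to the input.
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def knowledge_attention(hidden: np.ndarray, entity_embs: np.ndarray) -> np.ndarray:
    """Weight each linked entity by its relevance to the hidden state
    and return a knowledge context vector."""
    scores = entity_embs @ hidden   # (n_entities,) relevance scores
    weights = softmax(scores)       # attention distribution over entities
    return weights @ entity_embs    # weighted sum: knowledge context

rng = np.random.default_rng(0)
hidden = rng.normal(size=64)            # hypothetical model hidden state
entity_embs = rng.normal(size=(5, 64))  # 5 linked KG entity embeddings
context = knowledge_attention(hidden, entity_embs)  # shape (64,)
```

The attention weights themselves are what makes such a model inspectable: they indicate which knowledge-graph entities the prediction leaned on.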
“…Sub-research areas such as these have a number of researchers creating different research avenues [11, 12, 15, 17, 20, 26, 28, 32, 38, 39, 40, 41]. For instance, there are works on developing algorithms and novel DL architectures in XAI to add explainability to the models [42, 43, 44, 45, 46]. In comparison, there is also work that considers user experience and user requirements for XAI [7, 8, 9, 10, 47] and evaluates algorithms and models with user studies [48].…”
Section: Classifying HCML Research
Confidence: 99%