“…The notion of modeling relations as operations in some vector space was further extended by Wang et al. [19]. Over time, even more complex neural architectures were employed, e.g., capsule networks in CapsE by Nguyen et al. [63], recurrent skipping networks by Guo et al. [33], graph convolutional networks in GCN-Align by Wang et al. [93] or R-GCN by Schlichtkrull et al. [82], and recurrent transformers by Werner et al. [95]. Yet another approach was taken in RESCAL by Nickel et al. [65], where tensor-based techniques were used. Similar approaches were taken in TOAST by Jachnik et al. [39], TATEC by García-Durán et al. [28], DistMult by Yang et al. [97], HolE by Nickel et al. [64], ComplEx by Trouillon et al. [88], and ANALOGY by Liu et al. [52].…”
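For concreteness, the tensor-factorization family can be illustrated with the published scoring functions of DistMult [97] (diagonal relation matrix) and RESCAL [65] (full relation matrix). The sketch below uses randomly initialized toy embeddings purely for illustration; in the actual systems the entity and relation parameters are learned from observed triples, and the entity names here are invented.

import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Toy embedding tables; in practice these are learned from observed triples.
entity_emb = {e: rng.normal(size=dim) for e in ["berlin", "germany", "paris", "france"]}
rel_diag = {r: rng.normal(size=dim) for r in ["capital_of"]}          # DistMult: diagonal W_r
rel_full = {r: rng.normal(size=(dim, dim)) for r in ["capital_of"]}   # RESCAL: full W_r

def distmult_score(h, r, t):
    # DistMult: score(h, r, t) = <e_h, w_r, e_t> with a diagonal relation matrix.
    return float(np.sum(entity_emb[h] * rel_diag[r] * entity_emb[t]))

def rescal_score(h, r, t):
    # RESCAL: score(h, r, t) = e_h^T W_r e_t with a full relation matrix.
    return float(entity_emb[h] @ rel_full[r] @ entity_emb[t])

print(distmult_score("berlin", "capital_of", "germany"))
print(rescal_score("berlin", "capital_of", "germany"))

DistMult's diagonal relation matrix makes scoring linear in the embedding dimension, while RESCAL's full matrix captures richer interactions at quadratic cost; ComplEx [88] and ANALOGY [52] sit between these extremes.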
We present a novel approach for learning embeddings of ALC knowledge base concepts. The embeddings reflect the semantics of the concepts in such a way that the embedding of a complex concept can be computed from the embeddings of its parts by using appropriate neural constructors. Embeddings for different knowledge bases are vectors in a shared vector space, shaped in such a way that approximate subsumption checking for arbitrarily complex concepts can be done by the same neural network, called a reasoner head, for all the knowledge bases. To underline this unique property of enabling reasoning directly on embeddings, we call them reason-able embeddings. We report the results of an experimental evaluation showing that the difference in reasoning performance between training a separate reasoner head for each ontology and using a shared reasoner head is negligible.
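The following is a minimal sketch of this idea, assuming MLP-based constructors and a sigmoid-output reasoner head; the dimension, layer sizes, and training procedure are illustrative assumptions, not the paper's actual configuration.

import torch
import torch.nn as nn

DIM = 64  # embedding dimension (assumed; the paper's hyperparameters may differ)

class ConjunctionConstructor(nn.Module):
    # Neural constructor: maps embeddings of C and D to an embedding of C ⊓ D.
    def __init__(self, dim=DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, c, d):
        return self.net(torch.cat([c, d], dim=-1))

class ReasonerHead(nn.Module):
    # Shared head: given embeddings of C and D, predicts whether C ⊑ D holds.
    def __init__(self, dim=DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
    def forward(self, c, d):
        return torch.sigmoid(self.net(torch.cat([c, d], dim=-1)))

# Atomic concept embeddings for one knowledge base (learned in practice, random here).
person, student = torch.randn(DIM), torch.randn(DIM)
conj = ConjunctionConstructor()
head = ReasonerHead()
# Estimated probability of (Student ⊓ Person) ⊑ Person from the (untrained) head.
print(head(conj(student, person), person).item())

Because the constructors compose, the embedding of an arbitrarily nested complex concept can be built bottom-up from its parse tree, and the same head scores subsumption regardless of which knowledge base the embeddings come from.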
“…Contextual Knowledge Graph Embeddings Whereas our approach extracts the contextual views in a separate step before the actual knowledge graph embedding, there are works that create contextualized KG embeddings based on the full KG. Werner et al. [50] introduced a KG embedding over temporal contextualized KG facts. Their recurrent transformer transforms global KGEs into contextual embeddings, given the situation-specific factors of the relation and the subjective history of the entity.…”
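The excerpt does not specify the internals of the recurrent transformer; the sketch below conveys only the general shape of the idea, substituting a GRU for the recurrent transformer and treating the dimensions, the Contextualizer module, and its inputs as assumptions for illustration.

import torch
import torch.nn as nn

DIM = 32  # embedding dimension (illustrative)

class Contextualizer(nn.Module):
    # Maps a global entity embedding to a contextual one, conditioned on the
    # entity's recent history (a sequence of fact embeddings).
    def __init__(self, dim=DIM):
        super().__init__()
        self.history_enc = nn.GRU(dim, dim, batch_first=True)  # stand-in for the recurrent transformer
        self.combine = nn.Linear(2 * dim, dim)
    def forward(self, global_emb, history):
        _, h_n = self.history_enc(history)   # summarize the subjective history
        ctx = h_n.squeeze(0)
        return self.combine(torch.cat([global_emb, ctx], dim=-1))

entity = torch.randn(1, DIM)       # global KG embedding of the entity
history = torch.randn(1, 5, DIM)   # embeddings of the entity's last 5 observed facts
print(Contextualizer()(entity, history).shape)  # torch.Size([1, 32])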
Current deep learning methods for object recognition are purely data-driven and require a large number of training samples to achieve good results. Due to their sole dependence on image data, these methods tend to fail when confronted with new environments in which even small deviations occur. Human perception, however, has proven to be significantly more robust to such distribution shifts. It is assumed that this ability to deal with unknown scenarios is based on the extensive incorporation of contextual knowledge. Context can be based either on object co-occurrences in a scene or on memory of experience. In accordance with the human visual cortex, which uses context to form different object representations for a seen image, we propose an approach that enhances deep learning methods with external contextual knowledge encoded in a knowledge graph. To this end, we extract different contextual views from a generic knowledge graph, transform each view into vector space, and infuse it into a DNN. We conduct a series of experiments to investigate the impact of different contextual views on the object representations learned from the same image dataset. The experimental results provide evidence that the contextual views influence the image representations in the DNN differently and therefore lead to different predictions for the same images. We also show that context helps to strengthen the robustness of object recognition models for out-of-distribution images, which usually occur in transfer learning tasks or real-world scenarios.
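The abstract leaves the infusion mechanism open; one common option is late fusion, where the KG-derived context vector is concatenated with the backbone's image features before classification. The sketch below assumes this design, with invented dimensions and the ContextInfusedClassifier module introduced purely for illustration.

import torch
import torch.nn as nn

IMG_DIM, CTX_DIM, N_CLASSES = 512, 64, 10  # illustrative sizes

class ContextInfusedClassifier(nn.Module):
    # Late-fusion sketch: image features from a CNN backbone are concatenated
    # with a context vector derived from a KG embedding of a contextual view.
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(IMG_DIM + CTX_DIM, N_CLASSES)
    def forward(self, img_feat, ctx_emb):
        return self.head(torch.cat([img_feat, ctx_emb], dim=-1))

img_feat = torch.randn(8, IMG_DIM)   # features from an off-the-shelf backbone
ctx_emb = torch.randn(8, CTX_DIM)    # KG embedding of the chosen contextual view
logits = ContextInfusedClassifier()(img_feat, ctx_emb)
print(logits.shape)  # torch.Size([8, 10])

Under this design, swapping in a different contextual view changes ctx_emb while the image features stay fixed, which is consistent with the reported effect of different views yielding different predictions for the same images.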
“…Similar approaches to representing context in a driving scenario are shown in [24,26,38,74]. Ontologies have also been used for context-dependent recommendation tasks [108,40].…”
Automated driving is one of the most active research areas in computer science. Deep learning methods have made remarkable breakthroughs in machine learning in general and in automated driving (AD) in particular. However, there are still unsolved problems in guaranteeing the reliability and safety of automated systems, especially in effectively incorporating all available information and knowledge into the driving task. Knowledge graphs (KGs) have recently gained significant attention from both industry and academia for applications that benefit from exploiting structured, dynamic, and relational data. The complexity of graph-structured data, with its intricate relationships and inter-dependencies between objects, has posed significant challenges to existing machine learning algorithms. However, recent progress in knowledge graph embeddings and graph neural networks makes it possible to apply machine learning to graph-structured data. We therefore motivate and discuss the potential benefits of KGs applied to the main tasks of AD, including 1) ontologies, 2) perception, 3) scene understanding, 4) motion planning, and 5) validation. We then survey, analyze, and categorize ontologies and KG-based approaches for AD. Finally, we discuss current research challenges and propose promising directions for future research on KG-based solutions for AD.
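As a generic illustration of why graph neural networks make graph-structured scene data amenable to machine learning, the sketch below performs one standard GCN propagation step, H' = ReLU(Â H W) with Â = D^(-1/2)(A + I)D^(-1/2), over a toy scene graph; the graph, features, and weights are invented for illustration and do not come from any specific AD model.

import numpy as np

# Toy road-scene graph: nodes are scene entities, edges are relations between them.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)   # e.g., ego-vehicle linked to pedestrian and lane
H = np.random.default_rng(0).normal(size=(3, 4))   # initial node features
W = np.random.default_rng(1).normal(size=(4, 4))   # weight matrix (learned in practice)

A_hat = A + np.eye(3)                 # add self-loops
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(d ** -0.5)       # symmetric degree normalization
H_next = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)  # ReLU
print(H_next.shape)  # (3, 4) -- each node's features now mix in its neighbors'

Each propagation step lets every node aggregate information from its neighbors, which is what allows models such as R-GCN [82] to learn over the relational structure of a scene rather than over isolated objects.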