2016
DOI: 10.1016/j.websem.2016.05.001

LHD 2.0: A text mining approach to typing entities in knowledge graphs

Abstract: The type of the entity being described is one of the key pieces of information in linked data knowledge graphs. In this article, we introduce a novel technique for type inference that extracts types from the free-text description of the entity, combining lexico-syntactic pattern analysis with supervised classification. For lexico-syntactic (Hearst) pattern-based extraction we use our previously published Linked Hypernyms Dataset Framework. Its output is mapped to the DBpedia Ontology with exact string matching c…
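The pipeline the abstract describes — Hearst-style lexico-syntactic extraction of a hypernym from an entity's free-text description — can be illustrated with a minimal sketch. The regex, helper name, and example sentence below are assumptions for illustration only, not the LHD 2.0 implementation:

```python
import re

# Toy copula ("X is/was a Y") pattern, the simplest of the Hearst-style
# lexico-syntactic patterns; real systems use many more patterns and a
# proper parser rather than a single regex.
IS_A = re.compile(
    r"^(?P<entity>[\w\s]+?) (?:is|was|are|were) (?:a|an|the) "
    r"(?P<hypernym>[\w\s]+?)(?:\.|,| that| who| which)"
)

def extract_hypernym(sentence: str):
    """Return (entity, hypernym) if a copula pattern matches, else None."""
    m = IS_A.match(sentence)
    if m:
        return m.group("entity").strip(), m.group("hypernym").strip()
    return None

print(extract_hypernym("Karel Capek was a Czech writer who introduced the word robot."))
```

In the full system, the extracted hypernym string would then be mapped to a DBpedia Ontology class (the abstract mentions exact string matching as the first mapping step).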

Cited by 18 publications (11 citation statements)
References 21 publications
“…In other words, the ontological and semantic rules are grounded by substituting literals into formulas. Simply put, each ground rule consists of a weighted satisfaction metric derived from the formula's truth value which assigns a joint probability for each possible knowledge graph as in (12).…”
Section: Knowledge Graph Construction and Probabilistic Models
confidence: 99%
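The weighted ground-rule formulation described in this citation — each ground rule contributing a weighted satisfaction score that together induce a joint probability over candidate knowledge graphs — can be sketched as follows. The weights, satisfaction values, and helper function are toy assumptions, not the cited model:

```python
import math

def unnormalized_prob(ground_rules):
    """Log-linear score: exp of the weighted sum of rule satisfactions.

    ground_rules: list of (weight, satisfaction) pairs, with
    satisfaction in [0, 1] derived from each formula's truth value.
    """
    return math.exp(sum(w * s for w, s in ground_rules))

# Two candidate knowledge graphs scored against the same rule set:
graph_a = [(2.0, 1.0), (1.0, 0.5)]   # largely satisfies the rules
graph_b = [(2.0, 0.2), (1.0, 0.0)]   # largely violates them
assert unnormalized_prob(graph_a) > unnormalized_prob(graph_b)
```

Normalizing these scores over all possible graphs would yield the joint probability distribution the citation refers to; in practice that normalization is intractable and is handled by the inference machinery of frameworks such as Markov logic networks or probabilistic soft logic.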
“…Knowledge graph databases provide an effective solution to support the task of storing related IoT patterns. At the moment, several models and architectures are being tested for their capabilities to mine knowledge graphs from text [12], [13]. This usually involves using a hybrid of natural language processing (NLP) techniques to extract important information from large corpora of text [14].…”
Section: Introduction
confidence: 99%
“…Another study [14] performs RDF type prediction in KGs with the help of the hierarchical SLCN algorithm, using the set of incoming and outgoing relations as classification features. In [13], the authors propose a supervised hierarchical SVM classification approach for DBpedia by exploiting the abstract and the categories of the Wikipedia articles. An embedding-based model is proposed for entity typing [11], considering the structural information in the KG as well as the textual entity descriptions.…”
Section: Doctoral Consortium
confidence: 99%
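The top-down, local-classifier style of hierarchical type prediction discussed in these cited works can be sketched as follows. The toy taxonomy, keyword "classifiers", and function below are invented for illustration and stand in for trained per-node classifiers such as SVMs:

```python
# Tiny type taxonomy (parent -> children), loosely modeled on the
# upper levels of the DBpedia Ontology.
TAXONOMY = {
    "Thing": ["Agent", "Place"],
    "Agent": ["Person", "Organisation"],
}

# Keyword overlap stands in for a per-node classifier's decision.
KEYWORDS = {
    "Agent": {"born", "founded"},
    "Place": {"located"},
    "Person": {"born"},
    "Organisation": {"founded"},
}

def classify(tokens, node="Thing"):
    """Descend the taxonomy top-down: at each node, pick the child whose
    keyword set best overlaps the token set; stop when no child fires."""
    best, best_score = None, 0
    for child in TAXONOMY.get(node, []):
        score = len(KEYWORDS[child] & tokens)
        if score > best_score:
            best, best_score = child, score
    return classify(tokens, best) if best else node

print(classify({"she", "was", "born", "in", "prague"}))  # → Person
```

Stopping at an inner node when no child classifier fires is what lets hierarchical approaches assign a more general type when the evidence does not support a specific leaf.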
“…Potoniec et al. propose an algorithm that extracts SubClassOf axioms from Linked Data sources and verify the correctness of the extracted axioms through crowdsourcing [35]. Kliegr et al. evaluate their entity typing algorithm on a crowdsourced gold-standard data set of 2000 entities aligned with their corresponding types from the DBpedia ontology [25]. Note that here we only report on papers whose abstracts already make clear that crowdsourcing is used for evaluation; we expect this category to be much larger, since it also includes papers that do not mention their evaluation approach in the abstract and were therefore not retrieved by our keyword-based search.…”
Section: Human Computation For Semantic Web
confidence: 99%