2020
DOI: 10.1007/978-3-030-49461-2_32

ESBM: An Entity Summarization BenchMark

Abstract: Entity summarization is the problem of computing an optimal compact summary for an entity by selecting a size-constrained subset of triples from RDF data. Entity summarization supports a multiplicity of applications and has led to fruitful research. However, there is a lack of evaluation efforts that cover the broad spectrum of existing systems. One reason is a lack of benchmarks for evaluation. Some benchmarks are no longer available, while others are small and have limitations. In this paper, we create an En…
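The abstract frames entity summarization as choosing a size-constrained subset of an entity's RDF triples. A minimal sketch of that selection step, using a toy scoring heuristic (rarer predicates rank higher) that is purely illustrative and not any specific system evaluated on ESBM:

```python
from collections import Counter

def summarize_entity(triples, k):
    """Select a size-constrained summary of k triples for one entity.

    triples: list of (subject, predicate, object) tuples.
    The score here is a toy heuristic: predicates that occur less often
    in the entity's description are treated as more informative.
    """
    pred_freq = Counter(p for _, p, _ in triples)
    # Stable sort: rarer predicates come first, ties keep input order.
    ranked = sorted(triples, key=lambda t: pred_freq[t[1]])
    return ranked[:k]

# Toy entity description (namespaces and values are invented).
triples = [
    ("ex:e", "rdf:type", "ex:RadioStation"),
    ("ex:e", "ex:broadcastArea", "ex:Warrnambool"),
    ("ex:e", "rdf:type", "ex:Organisation"),
    ("ex:e", "ex:frequency", "103.7 FM"),
]
summary = summarize_entity(triples, 2)
```

With k = 2, the two triples with unique predicates are selected ahead of the repeated `rdf:type` triples; real systems replace this heuristic with learned or relevance-based scoring.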


Cited by 10 publications (5 citation statements). References 20 publications.
“…In this section, we illustrate how to tokenize the objects of triples and construct the tokenized formal context using the following triples of the actual entity "3WAY FM" in the ESBM dataset [34]:…”
Section: Tokenized Formal Context Construction
confidence: 99%
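The quoted passage describes tokenizing the objects of triples and building a tokenized formal context (a formal-concept-analysis structure relating objects to attributes). A sketch of one plausible reading, in which each triple is a formal object and the word tokens of its RDF object value are its attributes; this is illustrative, not the citing paper's exact procedure:

```python
import re

def tokenize(value):
    """Split an RDF object value into lowercase word tokens."""
    return [tok.lower() for tok in re.findall(r"[A-Za-z0-9]+", value)]

def build_formal_context(triples):
    """Build a toy tokenized formal context.

    Formal objects: the triples (indexed by position).
    Attributes: tokens drawn from each triple's RDF object value.
    Incidence: maps each triple index to its token set.
    """
    attributes = set()
    incidence = {}
    for i, (_, _, obj) in enumerate(triples):
        toks = set(tokenize(obj))
        incidence[i] = toks
        attributes |= toks
    return attributes, incidence

# Invented example values loosely modeled on the 3WAY FM entity.
triples = [
    ("ex:3WAY_FM", "ex:broadcastArea", "Warrnambool Victoria"),
    ("ex:3WAY_FM", "ex:callsignMeaning", "3 Warrnambool Area Youth"),
]
attrs, inc = build_formal_context(triples)
```

Shared tokens (here "warrnambool") make the overlap between triples explicit, which is what a formal context is built to expose.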
“…The real-world ESBM dataset we employed in the experiments is available in [34]; it contains two benchmark datasets for evaluating entity summarization. ESBM is currently the largest available real-world benchmark for entity summarization.…”
Section: Datasets and Implementation
confidence: 99%
“…ESA encodes triples using graph embedding (TransE) and employs a BiLSTM with a supervised attention mechanism. Although it outperformed unsupervised methods, the improvement reported in [12] was rather marginal, around +7% compared with unsupervised FACES-E [4] on the ESBM benchmark [8]. It inspired us to explore more effective deep learning models for the task of general-purpose entity summarization.…”
Section: Introduction
confidence: 99%
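The quote above mentions encoding triples with TransE. TransE scores a triple (h, r, t) by how close the head embedding translated by the relation embedding lands to the tail embedding, i.e. ||h + r − t||, with lower meaning more plausible. A minimal sketch with toy, untrained 3-d vectors:

```python
import math

def transe_score(h, r, t):
    """TransE plausibility score for a triple (h, r, t):
    the L2 norm ||h + r - t||; lower means a better fit."""
    return math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

# Toy embeddings chosen so that h + r lands exactly on t.
h = [1.0, 0.0, 0.5]
r = [0.0, 1.0, 0.0]
t = [1.0, 1.0, 0.5]
score = transe_score(h, r, t)  # h + r == t, so the score is 0.0
```

In a system like the ESA described above, such triple encodings would then feed a sequence model (the BiLSTM) rather than being used directly for selection.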