Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval 2017
DOI: 10.1145/3077136.3080751
DBpedia-Entity v2

Cited by 69 publications (21 citation statements); References 16 publications
“…We performed a series of experiments to answer the research questions stated in the introduction and to find the best configuration of our model by implementing KEWER and evaluating it on the DBpedia-Entity v2 dataset [13]. We implemented random walk generation ourselves and used gensim [29] for the Skip-Gram-based optimization step.…”
Section: Methods
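The gensim-based Skip-Gram step mentioned in this statement can be illustrated with a minimal sketch. The walk file name, its format, and the hyperparameter values below are illustrative assumptions, not details taken from the cited paper.

# Minimal sketch, assuming random-walk sequences have already been generated
# and stored one space-separated walk per line in "walks.txt" (hypothetical
# file name and format, not from the cited paper).
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

walks = LineSentence("walks.txt")   # streams one walk (token sequence) per line
model = Word2Vec(
    sentences=walks,
    vector_size=300,  # embedding dimensionality (illustrative value)
    window=5,         # context window over each walk
    sg=1,             # sg=1 selects the Skip-Gram objective
    min_count=1,      # keep rare entities and words
    workers=4,
    epochs=5,
)
model.wv.save("walk_embeddings.kv")  # hypothetical output path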
“…Entity similarity information obtained from entity embeddings was successfully utilized for re-ranking the results of term-based retrieval models in [14,17,44] using a learning-to-rank approach. A publicly available benchmark for entity search based on DBpedia [16] and its more recent version [13], which provides graded relevance judgments obtained using crowdsourcing and subsequent conflict resolution by experts, are standard test collections for evaluating entity search methods.…”
Section: Related Work
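As a rough illustration of the re-ranking idea described in this statement, the sketch below adds an embedding-based similarity feature on top of a term-based (e.g. BM25) score; the resulting feature vectors would then be scored by a learning-to-rank model. All function and variable names are hypothetical, not drawn from the cited works.

# Minimal sketch: build per-candidate feature vectors that combine a term-based
# retrieval score with query-entity embedding similarity, for use in a
# learning-to-rank re-ranker. Names and structure are illustrative only.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rerank_features(query_emb, candidates, entity_embs):
    """candidates: list of (entity_id, bm25_score); entity_embs: id -> vector."""
    feats = []
    for entity_id, bm25_score in candidates:
        sim = cosine(query_emb, entity_embs[entity_id])
        feats.append((entity_id, [bm25_score, sim]))  # feature vector per entity
    return feats

# A trained learning-to-rank model (e.g. LambdaMART) would score these feature
# vectors, and the candidate entities would be re-ordered by the predicted score.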