Proceedings of the 2015 International Conference on the Theory of Information Retrieval
DOI: 10.1145/2808194.2809473
Entity Linking in Queries

Cited by 39 publications (14 citation statements)
References 33 publications
“…Another interesting work is that presented by Hasibi et al [25] (a follow-up of the NTNU-UiS entity-annotator presented at the ERD Challenge), where the authors describe a method to solve the query-level entity-linking problem. The method is based on three phases: (i) candidate annotations are generated aiming at maximum recall, by exploiting two sources: DBpedia and Google's Freebase Annotations of the ClueWeb Corpora (FACC); (ii) candidate annotations are assigned a score by combining, via a generative model, the Mixture of Language Models (MLM) with a commonness score; (iii) interpretations are iteratively generated by picking non-overlapping annotations, starting from the ones with higher score.…”
Section: Related Work
confidence: 99%
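The three-phase method quoted above can be illustrated with a minimal sketch. This is not the authors' implementation: the link-count table, scores, and the toy query below are illustrative assumptions. It shows only two recoverable pieces: the commonness score of phase (ii), i.e. the fraction of times a mention links to a given entity, and the greedy selection of non-overlapping annotations in phase (iii).

```python
from collections import defaultdict

def commonness(link_counts, mention, entity):
    """Commonness P(entity | mention): fraction of a mention's links
    that point to the given entity (phase ii, illustrative only)."""
    total = sum(link_counts[mention].values())
    return link_counts[mention][entity] / total if total else 0.0

def greedy_interpretation(annotations):
    """Phase (iii) sketch: pick non-overlapping annotations,
    highest score first. Each annotation is (start, end, entity, score)
    over query token positions."""
    chosen = []
    for ann in sorted(annotations, key=lambda a: a[3], reverse=True):
        start, end = ann[0], ann[1]
        # Keep the annotation only if its span overlaps no chosen span.
        if all(end <= s or start >= e for s, e, _, _ in chosen):
            chosen.append(ann)
    return sorted(chosen)

# Toy example: query "new york pizza" (tokens 0..2).
link_counts = defaultdict(lambda: defaultdict(int))
link_counts["new york"]["New_York_City"] = 8
link_counts["new york"]["New_York_(state)"] = 2

cands = [
    (0, 2, "New_York_City", 0.8),
    (0, 2, "New_York_(state)", 0.2),
    (2, 3, "Pizza", 0.6),
]
print(commonness(link_counts, "new york", "New_York_City"))  # 0.8
print(greedy_interpretation(cands))
# [(0, 2, 'New_York_City', 0.8), (2, 3, 'Pizza', 0.6)]
```

The greedy step produces one interpretation; the quoted method iterates this to generate further interpretations from the remaining annotations.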
“…This entity-annotator outperforms TagMe, which is used as a baseline. Hasibi et al [25] address the problem of semantic mapping, i.e., finding the ranked list of pertinent entities, possibly without explicit mentions in the query (for example, Ann_Dunham for query obama mother). In a follow-up [26], they propose an annotator that employs supervised learning for the entity ranking step while tackling disambiguation with an unsupervised algorithm.…”
Section: Related Work
confidence: 99%
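The semantic-mapping task described above, surfacing entities related to the query even without an exact mention, can be sketched with a deliberately simple ranker. Everything here is a toy assumption (the term-overlap score and the entity descriptions are hypothetical, not the method of [25] or [26]); it only illustrates the shape of the task.

```python
def rank_entities(query, entity_descriptions):
    """Hypothetical semantic-mapping sketch: score each entity by the
    fraction of query terms appearing in its textual description."""
    q_terms = set(query.lower().split())
    scores = {}
    for entity, desc in entity_descriptions.items():
        d_terms = set(desc.lower().split())
        scores[entity] = len(q_terms & d_terms) / len(q_terms)
    # Highest-scoring entities first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example echoing the "obama mother" query from the text:
descs = {
    "Ann_Dunham": "anthropologist and mother of barack obama",
    "Barack_Obama": "44th president of the united states obama",
}
print(rank_entities("obama mother", descs))
# [('Ann_Dunham', 1.0), ('Barack_Obama', 0.5)]
```

The point is that Ann_Dunham is retrieved although the query never mentions her by name; real systems replace the overlap score with learned ranking features.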
“…What is more, even methods that are designed in particular for short text perform significantly worse on queries than those that are specifically devised for queries [19]. • Another fundamental difference is that when documents are annotated with entities, "it is implicitly assumed that the text provides enough context for each entity mention to be resolved unambiguously" [32]. For queries, on the other hand, there is only limited context, or none at all.…”
Section: Entity Linking In Queries
confidence: 99%
“…Semantic linking refers to the task of identifying entities "that are intended or implied by the user issuing the query" [53]. This problem was introduced as query mapping by Meij et al [53] and is also known as semantic mapping [32] and as (ranked) concepts to Wikipedia [18]. As the name we have adopted suggests, we seek to find entities that are semantically related to the query.…”
Section: Semantic Linking
confidence: 99%