2021
DOI: 10.48550/arxiv.2106.01515
Preprint

Question Answering Over Temporal Knowledge Graphs

Abstract: Temporal Knowledge Graphs (Temporal KGs) extend regular Knowledge Graphs by providing temporal scopes (e.g., start and end times) on each edge in the KG. While Question Answering over KGs (KGQA) has received some attention from the research community, QA over Temporal KGs (Temporal KGQA) is a relatively unexplored area. A lack of broad-coverage datasets has been another factor limiting progress in this area. We address this challenge by presenting CRONQUESTIONS, the largest known Temporal KGQA dataset, clearly str…
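To make the abstract's notion of "temporal scopes on each edge" concrete, here is a minimal sketch of a temporal KG fact store and a time-constrained lookup. The quintuple schema (subject, relation, object, start, end) follows the general shape of Wikidata-derived temporal KGs such as the one used by CronQuestions; the class names, helper function, and example facts are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical schema: a Temporal KG stores facts as quintuples
# (subject, relation, object, start, end), where [start, end] is the
# validity interval of the edge. All names below are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class TemporalFact:
    subject: str
    relation: str
    object: str
    start: int  # first year the fact holds
    end: int    # last year the fact holds

KG = [
    TemporalFact("Barack Obama", "position held", "President of the USA", 2009, 2017),
    TemporalFact("Angela Merkel", "position held", "Chancellor of Germany", 2005, 2021),
]

def answers_at(kg, relation, obj, year):
    """Entity answers for 'who held <obj> via <relation> in <year>?':
    keep facts matching the (relation, object) pattern whose validity
    interval contains the query year."""
    return [f.subject for f in kg
            if f.relation == relation and f.object == obj
            and f.start <= year <= f.end]

print(answers_at(KG, "position held", "President of the USA", 2012))
# → ['Barack Obama']
```

This is the kind of temporal constraint (interval containment, before/after ordering) that distinguishes Temporal KGQA from regular KGQA, where an edge either exists or does not.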

Cited by 9 publications (18 citation statements)
References 17 publications (25 reference statements)
“…Other KGQA datasets are Free917 (Cai and Yates, 2013), WebQuestions (Berant et al., 2013), ComplexQuestions (Bao et al., 2016), SimpleQuestions (Bordes et al., 2015), GraphQuestions (Su et al., 2016), WebQuestionsSP (Yih et al., 2016), 30MFactoidQA (Serban et al., 2016), ComplexWebQuestions (Talmor and Berant, 2018), PathQuestion (Zhou et al., 2018), MetaQA (Zhang et al., 2018), TempQuestions (Jia et al., 2018), TimeQuestions (Jia et al., 2021), CronQuestions (Saxena et al., 2021), FreebaseQA (Jiang et al., 2019), Compositional Freebase Questions (CFQ) (Keysers et al., 2019), Compositional Wikidata Questions (CWQ) (Cui et al., 2021), RuBQ (Korablinov and Braslavski, 2020; Rybin et al., 2021), GrailQA (Gu et al., 2021), Event-QA (Souza Costa et al., 2020), SimpleDBpediaQA (Azmy et al., 2018), CLC-QuAD (Zou et al., 2021), KQA Pro (Shi et al., 2020), SimpleQuestionsWikidata (Diefenbach et al., 2017), DBNQA (Yin et al., 2019), etc. These datasets do not fulfill our current criteria and thus are not part of the initial version of the KGQA leaderboard.…”
Section: KGQA Datasets
Confidence: 99%
“…Progress on Temporal KBQA is hindered by a lack of datasets that can truly assess the temporal reasoning capability of existing KBQA systems. To the best of our knowledge, TempQuestions (Jia et al., 2018a), TimeQuestions (Jia et al., 2021), and CronQuestions (Saxena et al., 2021) are the only available datasets for evaluating purely this aspect. These have, however, a number of drawbacks: (a) they contain only question-answer pairs and not their intermediate SPARQL queries, which could be useful in evaluating the interpretability of KBQA approaches based on semantic parsing (Yih et al., 2014); (b) unlike regular KBQA datasets (Dubey et al., 2019; Diefenbach et al., 2017b; Azmy et al., 2018) that can test KBQA generality over multiple knowledge bases such as DBpedia and Wikidata, they are suited to a single KB; (c) TempQuestions uses Freebase as the knowledge base, which is no longer maintained and was officially discontinued in 2014.…”
Section: Temporal
Confidence: 99%
“…The most relevant KBQA dataset to our work is TempQuestions (Jia et al., 2018a), upon which we base TempQA-WD, as described in Section 3. CronQuestions (Saxena et al., 2021) is another dataset whose emphasis is on temporal reasoning. However, this dataset also provides a custom KB derived from Wikidata, which acts as a source of truth for answering the questions provided as part of the dataset.…”
Section: Related Work
Confidence: 99%
“…CronQuestions To explore whether the improved memorization of facts translates to downstream tasks, we finetune the Uniform and Temporal models on CronQuestions, a dataset of 410K time-dependent questions based on temporal knowledge graphs (Saxena et al, 2021). It consists of questions where the answer is either an entity or a temporal expression.…”
Section: Memorizing Facts Across Time
Confidence: 99%
“…TEMPLAMA is similar in spirit to KBQA benchmarks which focus on temporal reasoning, such as TempQuestions and CronQuestions (Saxena et al., 2021). Its format, however, mimics the masked LM task typically used in pretraining, since it is intended as a zero/few-shot probe to measure temporally-scoped knowledge in pretrained models.…”
Section: Related Work
Confidence: 99%