Proceedings of the ACM SIGIR International Conference on Theory of Information Retrieval 2017
DOI: 10.1145/3121050.3121073
Knowledge Questions from Knowledge Graphs

Abstract: We address the novel problem of automatically generating quiz-style knowledge questions from a knowledge graph such as DBpedia. Questions of this kind have ample applications, for instance, to educate users about a specific domain or to evaluate their knowledge of it. To solve the problem, we propose an end-to-end approach. The approach first selects a named entity from the knowledge graph as an answer. It then generates a structured triple-pattern query, which yields the answer as its sole result. If a multiplec…
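The pipeline the abstract describes (answer first, query second) can be sketched as follows. This is a minimal illustration over a hypothetical toy knowledge graph, not DBpedia data and not the authors' implementation: given an answer entity, it greedily adds (predicate, object) triple patterns about that entity until the query's solution set is exactly the answer.

```python
# Toy KG: (subject, predicate, object) triples (illustrative only).
KG = {
    ("Einstein", "bornIn", "Ulm"),
    ("Einstein", "field", "Physics"),
    ("Planck", "bornIn", "Kiel"),
    ("Planck", "field", "Physics"),
    ("Curie", "bornIn", "Warsaw"),
    ("Curie", "field", "Chemistry"),
}

def solutions(pattern):
    """Entities x such that (x, p, o) is in KG for every (p, o) in pattern."""
    subjects = {s for s, _, _ in KG}
    return {x for x in subjects
            if all((x, p, o) in KG for p, o in pattern)}

def unique_query(answer):
    """Greedily extend a triple-pattern query about `answer` until it
    yields `answer` as its sole result; return None if no such query exists."""
    facts = [(p, o) for s, p, o in KG if s == answer]
    pattern = []
    for p, o in facts:
        pattern.append((p, o))
        if solutions(pattern) == {answer}:
            return pattern
    return None

query = unique_query("Einstein")
assert solutions(query) == {"Einstein"}
```

The greedy pattern selection here is an assumption for the sketch; the paper's actual query-generation strategy may differ.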

Cited by 43 publications (38 citation statements). References 42 publications (38 reference statements).
“…the number of times distractors appear in a corpus is similar to the key) (Kwankajornkiet et al 2016; Susanti et al 2015) selected distractors that are declared in a KB to be siblings of the key, which also implies some notion of similarity (siblings are assumed to be similar). Another approach that relies on structured knowledge sources is described in Seyler et al (2017). The authors used query relaxation, whereby queries used to generate question keys are relaxed to provide distractors that share some of the key features.…”
Section: Generation Tasks
confidence: 99%
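The query relaxation attributed to Seyler et al. above can be sketched in a few lines. This is a hedged illustration over a hypothetical toy knowledge graph, not the paper's code: dropping one pattern at a time from the key-defining query admits extra solutions, and those extra entities share the remaining features with the key, making them plausible distractors.

```python
# Toy KG of (subject, predicate, object) triples (illustrative only).
KG = {
    ("Einstein", "bornIn", "Ulm"),
    ("Einstein", "field", "Physics"),
    ("Planck", "bornIn", "Kiel"),
    ("Planck", "field", "Physics"),
    ("Bohr", "bornIn", "Copenhagen"),
    ("Bohr", "field", "Physics"),
}

def solutions(pattern):
    """Entities satisfying every (predicate, object) pattern over the KG."""
    subjects = {s for s, _, _ in KG}
    return {x for x in subjects if all((x, p, o) in KG for p, o in pattern)}

def distractors(key, pattern):
    """Relax the key query one pattern at a time; collect non-key solutions."""
    out = set()
    for i in range(len(pattern)):
        relaxed = pattern[:i] + pattern[i + 1:]
        out |= solutions(relaxed) - {key}
    return out

# Query whose sole solution is Einstein: born in Ulm AND field Physics.
key_query = [("bornIn", "Ulm"), ("field", "Physics")]
print(distractors("Einstein", key_query))
```

Here relaxing away the birthplace constraint yields Planck and Bohr, who share the key's "field Physics" feature; how the actual system scores and ranks such candidates is not shown in this excerpt.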
“…Eight of these studies focus on the difficulty of questions belonging to a particular domain, such as mathematical word problems (Wang and Su 2016; Khodeir et al 2018), geometry questions (Singhal et al 2016), vocabulary questions (Susanti et al 2017a), reading comprehension questions (Gao et al 2018), DFA problems (Shenoy et al 2016), code-tracing questions (Thomas et al 2019), and medical case-based questions (Kurdi et al 2019). The remaining six focus on controlling the difficulty of non-domain-specific questions (Lin et al 2015; Alsubait et al 2016; Kurdi et al 2017; Faizan and Lohmann 2018; Faizan et al 2017; Seyler et al 2017; Kumar 2015a, 2017a; Vinu et al 2016; Kumar 2017b, 2015b). Table 6 shows the different features proposed for controlling question difficulty in the aforementioned studies.…”
Section: Difficulty
confidence: 99%
“…Different qualitative and quantitative analyses were carried out to evaluate questions auto-generated from domain ontologies (Alsubait et al 2014; Vinu and Kumar 2017; Seyler et al 2016; Susanti et al 2017). Papasalouros et al (2017; 2011) auto-generated multiple choice questions (MCQs) from the Eupalineio Tunnel ontology, a domain ontology about ancient Greek history.…”
Section: Related Work
confidence: 99%