Proceedings of the 15th Conference of the European Chapter of The Association for Computational Linguistics: Volume 1 2017
DOI: 10.18653/v1/e17-1036

Generating Natural Language Question-Answer Pairs from a Knowledge Graph Using a RNN Based Question Generation Model

Abstract: In recent years, knowledge graphs such as Freebase that capture facts about entities and relationships between them have been used actively for answering factoid questions. In this paper, we explore the problem of automatically generating question answer pairs from a given knowledge graph. The generated question answer (QA) pairs can be used in several downstream applications. For example, they could be used for training better QA systems. To generate such QA pairs, we first extract a set of keywords from enti…
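The pipeline the abstract outlines (extract keywords from a knowledge-graph fact, then have an RNN-based model generate a question whose answer is the remaining entity) can be pictured with a minimal sketch. The code below is not the authors' implementation: the triple, toy vocabulary, and untrained GRU encoder-decoder are illustrative assumptions that only show how keyword tokens would flow into a question decoder.

import torch
import torch.nn as nn

# Hypothetical knowledge-graph fact; the object entity serves as the answer.
triple = ("Barack Obama", "place of birth", "Honolulu")
keywords = triple[0].lower().split() + triple[1].lower().split()
answer = triple[2]

# Toy vocabulary: special tokens, a few question words, and the extracted keywords.
vocab = ["<pad>", "<sos>", "<eos>", "where", "was", "born", "?"] + keywords
stoi = {w: i for i, w in enumerate(vocab)}

class Seq2SeqQG(nn.Module):
    """GRU encoder over keyword tokens, GRU decoder over question tokens."""
    def __init__(self, vocab_size, emb=32, hid=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)
        self.decoder = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, src, tgt):
        _, h = self.encoder(self.embed(src))          # summarize the keywords
        dec_out, _ = self.decoder(self.embed(tgt), h)  # condition question decoding on them
        return self.out(dec_out)                       # logits over question tokens

model = Seq2SeqQG(len(vocab))
src = torch.tensor([[stoi[w] for w in keywords]])
tgt = torch.tensor([[stoi[w] for w in ["<sos>", "where", "was", "born", "?"]]])
logits = model(src, tgt)    # (1, tgt_len, vocab_size); meaningful only after training
print(logits.shape, "-> intended answer:", answer)

A real system would train such a model on aligned (keyword, question) pairs and pair each generated question with the held-out object entity to form a QA pair.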

Cited by 61 publications (33 citation statements)
References 16 publications
“…In contrast to the rapid progress shown in Question Answering (QA) tasks (Rajpurkar et al., 2016; Joshi et al., 2017; Yang et al., 2018), the task of Question Generation (QG) remains understudied and challenging. However, as an important dual…” Context: “…during the Age of Enlightenment, philosophers such as John Locke advocated the principle in their writings, whereas others, such as Thomas Hobbes, strongly opposed it.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Furthermore, given that existing QA models often fall short by doing simple word/phrase matching rather than true comprehension (Jia and Liang, 2017), the task of QG, which usually needs complicated semantic reasoning and syntactic variation, should be another way to encourage true machine comprehension (Lewis and Fan, 2019). Recently, we have seen an increasing interest in the QG area, with mainly three categories: Text-based QG (Du et al., 2017; …), Knowledge-Base-based QG (Reddy et al., 2017; Serban et al., 2016), and Image-based QG (Li et al., 2018; Jain et al., 2017). Our work focuses on the Text-based QG branch.…”
Section: Introduction (mentioning)
Confidence: 99%
“…With the recent development of deep representation learning and large QA datasets, there has been research on recurrent neural network based approaches for question generation. Serban et al. (2016) used the encoder-decoder framework to generate QA pairs from knowledge base triples; Reddy et al. (2017) generated questions from a knowledge graph; … studied how to generate questions from sentences using an attention-based sequence-to-sequence model and investigated the effect of exploiting sentence- vs. paragraph-level information.…”
Section: Question Generation (mentioning)
Confidence: 99%
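As a companion to the quote above, here is a minimal sketch of the dot-product attention step that the attention-based sequence-to-sequence question generators it mentions rely on. The random tensors stand in for real encoder outputs and decoder state; only the shapes and the attention computation itself are being illustrated, not any particular cited model.

import torch
import torch.nn.functional as F

enc_states = torch.randn(1, 10, 64)   # (batch, src_len, hidden): encoder outputs over the source sentence
dec_state = torch.randn(1, 1, 64)     # (batch, 1, hidden): current decoder hidden state

scores = torch.bmm(dec_state, enc_states.transpose(1, 2))   # (1, 1, src_len) alignment scores
weights = F.softmax(scores, dim=-1)                         # attention distribution over source tokens
context = torch.bmm(weights, enc_states)                    # (1, 1, hidden) context vector

# The context vector is combined with the decoder state to predict the next
# question word, letting the model focus on the source tokens most relevant
# to the word currently being generated.
print(weights.shape, context.shape)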
“…Other models focus only on the performance of the QA task (Yang et al., 2017; …) and not explicitly on the quality of the generated questions. Apart from generating questions from text there is also research on generating questions from images (Jain et al., 2017; …) and knowledge bases (Serban et al., 2016; Reddy et al., 2017).…”
Section: Related Work (mentioning)
Confidence: 99%
“…Creating newer datasets for specific domains or augmenting existing datasets with more data is a tedious, time-consuming and expensive process. To alleviate this problem and create even more training data, there is growing interest in developing techniques that can automatically generate questions from a given source, say a document, knowledge base (Reddy et al., 2017; Serban et al., 2016), or image. We refer to this task as Automatic Question Generation (AQG).…”
Section: Introduction (mentioning)
Confidence: 99%