2017
DOI: 10.48550/arxiv.1704.08300
Preprint
Diversity driven Attention Model for Query-based Abstractive Summarization

Cited by 16 publications
(34 citation statements)
References 0 publications
“…Datasets We use multiple QA datasets, including SQuAD (Rajpurkar et al, 2016), NewsQA (Trischler et al, 2016), TriviaQA (Joshi et al, 2017), SearchQA (Dunn et al, 2017), HotpotQA (Yang et al, 2018) and NaturalQuestions (Kwiatkowski et al, 2019) to train HLTC-MRQA, following Su et al (2019). We evaluate our model on the Debatepedia dataset (Nema et al, 2017) and DUC2005-7 dataset (in Appendix).…”
Section: Methods
confidence: 99%
“…QFS is a more complex task that aims to generate a summary according to the query and its relevant document(s). Nema et al (2017) proposed an encode-attend-decode system with an additional query attention mechanism and a diversity-based attention mechanism to generate a more query-relevant summary. Baumel et al (2018) incorporated query relevance into a pre-trained abstractive summarizer to make the model aware of the query, while Xu and Lapata (2020a) discovered a new type of connection between generic summaries and QFS queries, and provided a universal representation for them which allows generic summarization data to be further exploited for QFS.…”
Section: Related Work
confidence: 99%
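The Related Work excerpt above describes the paper's diversity-based attention: successive attention context vectors are adjusted so the decoder does not attend to the same source content at every step. A minimal numpy sketch of one published variant (orthogonalizing the current context against the previous diversity-adjusted context) is shown below; the function names and the simple dot-product scoring are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def attention_context(dec_state, enc_states):
    """Standard dot-product attention (illustrative scoring choice).

    Weights each encoder state by its similarity to the decoder
    state, then returns the weighted average (the context vector).
    """
    scores = enc_states @ dec_state
    weights = np.exp(scores - scores.max())  # stable softmax
    weights /= weights.sum()
    return weights @ enc_states

def diverse_context(c_t, d_prev):
    """Diversity step: orthogonalize the current context c_t against
    the previous diversity-adjusted context d_prev, removing the
    component already "used" and discouraging repeated attention.
    """
    if d_prev is None:  # first decoding step: nothing to subtract
        return c_t
    proj = (c_t @ d_prev) / (d_prev @ d_prev) * d_prev
    return c_t - proj

# Usage: two decoding steps over a toy encoder sequence.
rng = np.random.default_rng(0)
enc = rng.normal(size=(5, 4))      # 5 encoder states, dim 4
c1 = attention_context(rng.normal(size=4), enc)
d1 = diverse_context(c1, None)
c2 = attention_context(rng.normal(size=4), enc)
d2 = diverse_context(c2, d1)       # d2 is orthogonal to d1
```

The orthogonalization guarantees that each adjusted context carries only information not already present in the previous step's context, which is the repetition-avoidance effect the excerpt attributes to the diversity mechanism.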