Proceedings of the Web Conference 2020
DOI: 10.1145/3366423.3380009
Unsupervised Dual-Cascade Learning with Pseudo-Feedback Distillation for Query-Focused Extractive Summarization

Cited by 17 publications (19 citation statements). References 17 publications.
“…These baselines generate the summaries of all documents in a document set, which are then ranked using RoBERTa MS-MARCO. Moreover, we compare our model with four recent works: i) CES-50 (Feigenblat et al., 2017), ii) RSA (Baumel et al., 2018), iii) QUERYSUM (Xu and Lapata, 2020), and iv) DUAL-CES (Roitman et al., 2020). Compared with RSA (Baumel et al., 2018), we find that our model outperforms them in all datasets in terms of both R-1 and R-2 Recall, but fails to outperform them in terms of R-SU4. Moreover, we find, based on a paired t-test (p ≤ .05), that the weakly supervised learning significantly outperforms the baselines in terms of both Recall and F1.…”
Section: Results
confidence: 74%
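
The significance test quoted above is a standard paired t-test over per-topic scores. A minimal sketch using SciPy's ttest_rel, where the score arrays are hypothetical placeholders rather than values from the paper:

    from scipy.stats import ttest_rel

    # Hypothetical per-topic ROUGE-1 Recall for the model and one baseline.
    model_scores = [0.41, 0.38, 0.44, 0.40, 0.36]
    baseline_scores = [0.37, 0.35, 0.41, 0.36, 0.34]

    # Paired test: each topic yields one score per system.
    t_stat, p_value = ttest_rel(model_scores, baseline_scores)
    if p_value <= 0.05:
        print(f"significant at p <= .05 (t={t_stat:.3f}, p={p_value:.4f})")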
“…They adopted the Pointer-Generator Network (PGN) (See et al., 2017), pre-trained for the generic abstractive summarization task on a large dataset, to predict query-focused summaries in the target dataset by modifying the attention mechanism of the PGN model. However, their model failed to outperform various extractive approaches in terms of ROUGE scores (Feigenblat et al., 2017; Roitman et al., 2020).…”
Section: Related Work
confidence: 98%
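
The attention modification described above biases the PGN decoder toward query-relevant source tokens. A minimal sketch of the general idea, assuming dot-product attention; query_scores is an assumed per-token query-relevance vector, not the cited paper's exact formulation:

    import torch
    import torch.nn.functional as F

    def query_aware_attention(dec_state, enc_states, query_scores):
        # dec_state: (hidden,); enc_states: (src_len, hidden);
        # query_scores: (src_len,) per-token query relevance in [0, 1].
        attn = F.softmax(enc_states @ dec_state, dim=0)
        # Rescale attention by query relevance and renormalize, so the
        # decoder attends preferentially to query-related source tokens.
        attn = attn * query_scores
        attn = attn / attn.sum()
        context = attn @ enc_states  # query-focused context vector
        return context, attn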
“…Therefore, search snippet generation can be considered one kind of Query-Focused Summarization (QFS). Similar to generic document summarization, this body of work can also be divided into extractive approaches (Zhu et al., 2019; Feigenblat et al., 2017; Roitman et al., 2020) and abstractive approaches (Laskar et al., 2020a; Baumel et al., 2018; Chen et al., 2020a; Su et al., 2020; Laskar et al., 2020b). As some PTMs have proved effective for text generation, most existing work adopted PTMs to generate abstractive snippets.…”
Section: Snippet Generation
confidence: 99%
“…We take the two top central customer and agent sentences (2+2). The Cross-Entropy Summarizer (CES) is an unsupervised, extractive summarizer (Roitman et al., 2020; Feigenblat et al., 2017) that treats summarization as a multi-criteria optimization over the sentence space, where several summary-quality objectives are considered. The aim is to select a subset of sentences that optimizes these quality objectives.…”
Section: Baselines
confidence: 99%
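
The optimization loop behind CES is the Cross-Entropy Method: maintain per-sentence inclusion probabilities, sample candidate subsets, and shift the probabilities toward the best-scoring ("elite") samples. A minimal sketch, assuming quality(subset) collapses the summary-quality objectives into a single scalar; the real CES objectives and summary-length handling are more involved:

    import numpy as np

    def cross_entropy_summarizer(n_sentences, quality, n_samples=200,
                                 elite_frac=0.1, alpha=0.7, iters=30, seed=0):
        rng = np.random.default_rng(seed)
        p = np.full(n_sentences, 0.5)             # inclusion probabilities
        n_elite = max(1, int(elite_frac * n_samples))
        for _ in range(iters):
            # Sample candidate summaries as Bernoulli subsets of sentences.
            samples = rng.random((n_samples, n_sentences)) < p
            scores = np.array([quality(np.flatnonzero(s)) for s in samples])
            elite = samples[np.argsort(scores)[-n_elite:]]
            # Smoothed update toward the empirical elite distribution.
            p = alpha * elite.mean(axis=0) + (1 - alpha) * p
        return np.flatnonzero(p > 0.5)            # selected sentence indices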