Proceedings of the Second Workshop on Scholarly Document Processing 2021
DOI: 10.18653/v1/2021.sdp-1.13

CNLP-NITS @ LongSumm 2021: TextRank Variant for Generating Long Summaries

Abstract: The huge influx of published papers in the field of machine learning makes the task of summarizing scholarly documents vital, not just to eliminate redundancy but also to provide a complete and satisfying crux of the content. We participated in LongSumm 2021: The 2nd Shared Task on Generating Long Summaries for scientific documents, where the task is to generate long summaries for scientific papers provided by the organizers. This paper discusses our extractive summarization approach to solve the task…

Cited by 1 publication (1 citation statement)
References 17 publications
“…The proposed models for the extended summary generation task include jointly learning to predict sentence importance and sentence section to extract top sentences (Sotudeh et al., 2020); utilizing section-contribution computations to pick sentences from important sections for forming the final summary (Ghosh Roy et al., 2020); identifying salient sections for generating abstractive summaries (Gidiotis et al., 2020); ensembling extraction and abstraction models to form the final summary (Ying et al., 2021); an extractive model using the TextRank algorithm equipped with BM25 as the similarity function (Kaushik et al., 2021); and incorporating sentence embeddings into a graph-based extractive summarizer in an unsupervised manner (Ramirez-Orta and Milios, 2021). Unlike these works, we do not exploit any sectional or citation information in this work.…”
Section: Related Work
Confidence: 99%
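The citing paper characterizes the approach of Kaushik et al. (2021) as an extractive model built on the TextRank algorithm with BM25 as the similarity function. Below is a minimal, self-contained sketch of what such a pipeline might look like; the tokenizer, BM25 parameters (k1, b), the damping factor, the iteration count, and the top-k cutoff are illustrative assumptions, not the authors' published configuration.

```python
# Sketch of a TextRank-style extractive summarizer that uses BM25 as the
# sentence-similarity function. Parameter values (k1, b, damping, top_k) are
# illustrative assumptions, not the authors' settings.
import math
import re
from collections import Counter


def tokenize(sentence):
    """Very simple tokenizer; the original system may use a different one."""
    return re.findall(r"[a-z0-9]+", sentence.lower())


def bm25_score(query_tokens, doc_tokens, df, n_docs, avg_len, k1=1.5, b=0.75):
    """BM25 score of one sentence (doc_tokens) against another sentence as the query."""
    tf = Counter(doc_tokens)
    score = 0.0
    for term in set(query_tokens):
        if tf[term] == 0:
            continue
        idf = math.log(1 + (n_docs - df[term] + 0.5) / (df[term] + 0.5))
        norm = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc_tokens) / avg_len))
        score += idf * norm
    return score


def textrank_bm25_summary(sentences, top_k=20, damping=0.85, iterations=50):
    """Rank sentences with PageRank over a BM25-weighted sentence graph."""
    tokens = [tokenize(s) for s in sentences]
    n = len(sentences)
    avg_len = sum(len(t) for t in tokens) / max(n, 1)
    # Document frequency of each term, counting each sentence as one "document".
    df = Counter(term for t in tokens for term in set(t))

    # Edge weight from sentence i to sentence j: BM25 score of j with i as the query.
    weights = [[bm25_score(tokens[i], tokens[j], df, n, avg_len) if i != j else 0.0
                for j in range(n)] for i in range(n)]
    out_sums = [sum(row) for row in weights]

    # Standard power iteration with a damping factor, as in TextRank/PageRank.
    scores = [1.0 / n] * n
    for _ in range(iterations):
        scores = [
            (1 - damping) / n
            + damping * sum(weights[j][i] / out_sums[j] * scores[j]
                            for j in range(n) if out_sums[j] > 0)
            for i in range(n)
        ]

    # Take the top-k ranked sentences and restore their original document order.
    ranked = sorted(range(n), key=lambda i: scores[i], reverse=True)[:top_k]
    return [sentences[i] for i in sorted(ranked)]
```

A full system for the shared task would additionally enforce the summary-length budget when selecting sentences; that step is omitted from this sketch.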