Proceedings of the First Workshop on Scholarly Document Processing 2020
DOI: 10.18653/v1/2020.sdp-1.27
IIITBH-IITP@CL-SciSumm20, CL-LaySumm20, LongSumm20

Abstract: In this paper, we present the IIIT Bhagalpur and IIT Patna team's efforts to solve three shared tasks, namely CL-SciSumm 2020, CL-LaySumm 2020, and LongSumm 2020 at SDP 2020. These tasks require generating medium-scale, lay, and long summaries, respectively, for scientific articles. For the first two tasks, unsupervised systems are developed, while for the third, we have developed a supervised system. The performances of all the systems are evaluated on the datasets associated with the shared task…

Cited by 5 publications (4 citation statements). References 17 publications.
“…For analyzing our proposed model's performance on the CL-SciSumm-2020 corpus, we have used R-2 and R-SU4 F-1 scores (as the other comparable models are reported with these metrics). We have experimented with generating abstract and human summaries. As benchmarks, we have selected the research works submitted to CL-SciSumm-2019/2020: (1) Jaccard-focused GCN (Umapathy et al., 2020): an extractive summarizer utilizing a cross-sentence graph and graph attention networks; (2) Clustering (Mishra et al., 2020): based on different clustering algorithms followed by sentence-scoring functions; (3) MMR2 (Reddy et al., 2020): based on the maximal marginal relevance technique; and (4) LSTM+BabelNet (Chiruzzo et al., 2019): BabelNet vectors were used to train the LSTM. The CL-SciSumm task provides an evaluation script, which is used to calculate the R-2 and R-SU4 values for the model-generated summaries against the test set.…”
Section: Results: CL-SciSumm-2020 Corpus
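The R-2 (ROUGE-2) score referenced above rewards bigram overlap between a system summary and a reference summary. As a minimal sketch of the idea only — official CL-SciSumm evaluation uses the full ROUGE toolkit, which additionally applies stemming and, for R-SU4, skip-bigram matching, none of which is reproduced here:

```python
from collections import Counter

def ngrams(tokens, n):
    # Multiset of n-grams in a token sequence.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_f1(candidate, reference, n=2):
    """Simplified ROUGE-N F1: n-gram overlap between candidate and reference."""
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge_n_f1("the cat sat", "the cat sat on the mat")` has perfect bigram precision but recall of only 2/5, giving an F1 of 4/7.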
“…The majority of recent work in disinformation analysis has been conducted with computational linguistic methods (Ruffo, Semeraro, Giachanou, & Rosso, 2023). Some recent (and straightforward) approaches put together an NLP pipeline (preprocessing, feature extraction, model building) to exploit traditional text analysis techniques (Asaad & Erascu, 2018; Koloski, Pollak, & Škrlj, 2020) or to train neural networks (Reddy, Suman, Saha, & Bhattacharyya, 2020; Umer et al., 2020; Eldesoky & Moussa, 2021; Qazi, Khan, & Ali, 2020; Tida, Hsu, & Hei, 2022; Dun, Tu, Chen, Hou, & Yuan, 2021) for fake news classification.…”
Section: Natural Language Processing for Stylistic Characterization
“…They also suggested an abstractive summarizer based on the BART transformer that runs after the extractive summarizer. Other methods were a Convolutional Neural Network (CNN) in (Reddy et al., 2020), a Graph Convolutional Network (GCN) and a Graph Attention Network (GAT) in (Li et al., 2020), and unsupervised clustering in (Mishra et al., 2020) and (Ju et al., 2020).…”
Section: Related Work
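The maximal marginal relevance technique cited among the benchmarks greedily selects sentences that score high on relevance while penalizing similarity to sentences already chosen. A minimal sketch, assuming precomputed inputs — `sim_to_query` (each sentence's relevance score) and `sim_matrix` (pairwise sentence similarities) are illustrative assumptions, not the cited system's actual features:

```python
def mmr_select(sentences, sim_to_query, sim_matrix, k, lam=0.7):
    """Greedy MMR: pick k sentences balancing relevance against redundancy.

    lam trades off relevance (lam=1) versus diversity (lam=0)."""
    selected, remaining = [], list(range(len(sentences)))
    while remaining and len(selected) < k:
        def score(i):
            # Redundancy = highest similarity to any already-selected sentence.
            redundancy = max((sim_matrix[i][j] for j in selected), default=0.0)
            return lam * sim_to_query[i] - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return [sentences[i] for i in selected]
```

With two near-duplicate high-relevance sentences, MMR takes one of them and then prefers a less relevant but more diverse sentence over the near-duplicate.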