2021 International Conference on Computational Performance Evaluation (ComPE)
DOI: 10.1109/compe53109.2021.9752056

Graph Based Extractive News Articles Summarization Approach leveraging Static Word Embeddings

Cited by 3 publications (2 citation statements)
References 8 publications

“…ROUGE-1, ROUGE-2, ROUGE-L, BLEU-1, BLEU-2, BLEU-3, BLEU-4, F-measure (of 1, 2 and 3 grams), WEEM4TSw, WEEM4TSg and WEEM4TSf. Barman et al. (2021) represent each word by GloVe vectors and each sentence by the average of the vectors of the words it contains. To build the network, each sentence is a node and the edge weights are computed by cosine similarity.…”
Section: Newsroom Summarization Dataset
Mentioning, confidence: 99%
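
The pipeline this citing statement describes (GloVe averaging for sentence vectors, a cosine-similarity graph, centrality-based ranking) can be sketched in a few lines. A minimal sketch in Python, assuming gensim's downloader and the `glove-wiki-gigaword-100` vectors; the model name and helper names are illustrative choices, not specifics from the paper:

```python
import numpy as np
import networkx as nx
import gensim.downloader as api

# Pretrained GloVe vectors; the model name is an assumption for illustration,
# the paper only says static GloVe embeddings are used.
glove = api.load("glove-wiki-gigaword-100")

def sentence_vector(sentence):
    # Represent a sentence as the average of its in-vocabulary word vectors.
    words = [w for w in sentence.lower().split() if w in glove]
    if not words:
        return np.zeros(glove.vector_size)
    return np.mean([glove[w] for w in words], axis=0)

def summarize(sentences, k=3):
    # One node per sentence; edges weighted by cosine similarity.
    vecs = [sentence_vector(s) for s in sentences]
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            denom = np.linalg.norm(vecs[i]) * np.linalg.norm(vecs[j])
            sim = float(vecs[i] @ vecs[j] / denom) if denom else 0.0
            if sim > 0:
                graph.add_edge(i, j, weight=sim)
    # PageRank-style centrality gives each sentence a salience score.
    scores = nx.pagerank(graph, weight="weight")
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]  # restore document order
```

Calling `summarize` on an article's sentence list returns the k most central sentences in their original order, matching the extractive, graph-based setup described above.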
“…Much attention has been given to the use of word embeddings in the TextRank algorithm for text summarization [12]-[16]. These works use different kinds of word embeddings, such as Global Vectors for word representation (GloVe) [12], [14], [17], Word2Vec [13], [14], FastText [14], and Sentence-BERT (SBERT) [15], [16]. None of these works, however, used weighted word embeddings, so the embeddings are not weighted according to collection statistics.…”
Section: Introduction
Mentioning, confidence: 99%
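
For contrast, "weighted word embedding" in the sense criticized here typically means scaling each word vector by a collection statistic before averaging. A minimal sketch using IDF weights; the choice of IDF is an assumption for illustration (the citing work may use a different weighting), and `vectors` is assumed to be a gensim KeyedVectors-like object as in the previous sketch:

```python
import math
from collections import Counter
import numpy as np

def idf_weights(tokenized_docs):
    # Inverse document frequency over the collection: a simple
    # "collection statistic" in the sense used above.
    n = len(tokenized_docs)
    df = Counter(w for doc in tokenized_docs for w in set(doc))
    return {w: math.log(n / df[w]) for w in df}

def weighted_sentence_vector(words, vectors, idf):
    # IDF-weighted average instead of a plain mean, so frequent,
    # uninformative words contribute less to the sentence vector.
    pairs = [(vectors[w], idf.get(w, 0.0)) for w in words if w in vectors]
    if not pairs:
        return np.zeros(vectors.vector_size)
    vecs, weights = zip(*pairs)
    if sum(weights) == 0:
        return np.mean(vecs, axis=0)
    return np.average(vecs, axis=0, weights=weights)
```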