2021
DOI: 10.1007/s10462-021-09964-4

Study of automatic text summarization approaches in different languages

Cited by 33 publications (7 citation statements)
References 60 publications
“…The formulae to compute this network are shown in Equations (10), (11), and (12):

$\overrightarrow{h_t} = \mathrm{GRU}(x_t, \overrightarrow{h}_{t-1})$  (10)

$\overleftarrow{h_t} = \mathrm{GRU}(x_t, \overleftarrow{h}_{t+1})$  (11)

$h_t = \overrightarrow{h_t} \oplus \overleftarrow{h_t}$  (12)

where $\overleftarrow{h_t}$ is the backward state GRU, $\overrightarrow{h_t}$ is the forward state GRU, $\oplus$ indicates the concatenation operation of two vectors, and $x_t$ is the input at time $t$ [29, 30].…”
Section: Methods
confidence: 99%
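To make the concatenation in Equation (12) concrete, here is a minimal PyTorch sketch (the library choice, layer sizes, and variable names are illustrative assumptions, not taken from the cited work) showing that a bidirectional GRU's output at each timestep is exactly the forward state concatenated with the backward state:

```python
# Minimal sketch of the bidirectional GRU encoding described above
# (PyTorch is an assumed implementation choice, not from the cited paper).
import torch
import torch.nn as nn

embed_dim, hidden_dim, seq_len, batch = 32, 64, 10, 2

# bidirectional=True runs a forward GRU and a backward GRU over the input
# and concatenates their hidden states, i.e. h_t = h_fwd ⊕ h_bwd (Eq. 12).
gru = nn.GRU(input_size=embed_dim, hidden_size=hidden_dim,
             bidirectional=True, batch_first=True)

x = torch.randn(batch, seq_len, embed_dim)   # x_t: input at time t
out, _ = gru(x)                              # shape: (batch, seq_len, 2 * hidden_dim)

# The last dimension is the concatenation of forward and backward states.
h_fwd, h_bwd = out[..., :hidden_dim], out[..., hidden_dim:]
h_t = torch.cat([h_fwd, h_bwd], dim=-1)      # identical to `out`
assert torch.equal(h_t, out)
```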
“…Anand and Wagh (2022) explored effective deep learning approaches tailored for summarizing legal texts, highlighting the importance of domain-specific summarization solutions. Multilingual summarization aims to handle diverse languages beyond English, with Kumar et al. (2021) surveying techniques for languages such as Chinese, Arabic, Persian, Hindi, Tamil, and Bengali. Neural network approaches have shown promise in this domain by relying less on language-specific resources.…”
Section: Multilingual and Query-based Summarization
confidence: 99%
“…This was because the dataset was based on human summaries. Kumar et al. [15] introduced a model that builds a network in which text phrases are depicted as nodes and the relationships between different sentences are represented as the weights of the edges linking them. In contrast to traditional cosine similarity, which treats all words identically, a modified inverse sentence frequency-cosine similarity was constructed to assign different weights to distinct terms in the document.…”
Section: Extractive Text Summarization
confidence: 99%
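A minimal Python sketch of the graph construction just described, using an inverse-sentence-frequency (ISF)-weighted cosine similarity as the edge weight. The tokenization, the exact ISF formula, and the sample sentences are illustrative assumptions; the weighting scheme in [15] may differ in detail:

```python
# Sketch: sentences as graph nodes, ISF-weighted cosine similarity as edge
# weights. This follows the common LexRank-style "ISF-modified cosine" as an
# assumed stand-in for the modified similarity described in [15].
import math
from collections import Counter

def isf_weights(sentences):
    """Inverse sentence frequency: terms rare across sentences weigh more."""
    n = len(sentences)
    sent_freq = Counter(w for s in sentences for w in set(s.lower().split()))
    return {w: math.log(n / sf) + 1.0 for w, sf in sent_freq.items()}

def modified_cosine(s1, s2, isf):
    """Cosine similarity in which each term is scaled by its ISF weight."""
    tf1, tf2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    num = sum(tf1[w] * tf2[w] * isf[w] ** 2 for w in tf1.keys() & tf2.keys())
    norm1 = math.sqrt(sum((tf1[w] * isf[w]) ** 2 for w in tf1))
    norm2 = math.sqrt(sum((tf2[w] * isf[w]) ** 2 for w in tf2))
    return num / (norm1 * norm2) if norm1 and norm2 else 0.0

sentences = [
    "Graph based summarizers rank sentences by centrality.",
    "Sentences are nodes and similarity scores weight the edges.",
    "Centrality then selects the most representative sentences.",
]
isf = isf_weights(sentences)
# Weighted edges of the sentence graph (upper triangle only).
edges = {(i, j): modified_cosine(sentences[i], sentences[j], isf)
         for i in range(len(sentences)) for j in range(i + 1, len(sentences))}
print(edges)
```

Ranking nodes of this weighted graph by centrality (e.g., running PageRank over the edge weights) then selects the sentences for the extractive summary.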