Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume 2021
DOI: 10.18653/v1/2021.eacl-main.220

StructSum: Summarization via Structured Representations

Abstract: Abstractive text summarization aims at compressing the information of a long source document into a rephrased, condensed summary. Despite advances in modeling techniques, abstractive summarization models still suffer from several key challenges: (i) layout bias: they overfit to the style of training corpora; (ii) limited abstractiveness: they are optimized to copy n-grams from the source rather than generating novel abstractive summaries; (iii) lack of transparency: they are not interpretable. In this work, we prop…

Cited by 5 publications (3 citation statements)
References 32 publications (53 reference statements)
“…While promising, prior approaches often produce imprecise summaries containing errors with utterly different semantics and meanings from the original text. This is because they fail to capitalize on the structured linguistic content existing in documents or can not explicitly model the dependencies between nested complex factual pieces [10]. Most recent works address this problem by introducing a fact-driven strategy [11][12][13][14].…”
Section: Introduction (mentioning)
Confidence: 99%
“…Syntactic parsing is a fundamental problem for Natural Language Processing in its pursuit towards deep understanding and computer-friendly representation of human linguistic input. Parsers are in charge of efficiently and accurately providing syntactic information so that it can be used for downstream artificial intelligence applications such as machine translation (Zhang, Li, Fu and Zhang, 2019; Yang, Wong, Chao and Zhang, 2020; Zhang, Li, Fu and Zhang, 2021), opinion mining (Zhang, Zhang, Wang, Li and Zhang, 2020), relation and event extraction (Nguyen and Verspoor, 2019), question answering (Cao, Liang, Li and Lin, 2021), summarization (Balachandran, Pagnoni, Lee, Rajagopal, Carbonell and Tsvetkov, 2021), sentiment classification (Bai, Wang, Chen, Yang, Bai, Yu and Tong, 2021), sentence classification (Zhang et al, 2021) or semantic role labeling and named entity recognition (Sachan, Zhang, Qi and Hamilton, 2021), among others.…”
Section: Introduction (mentioning)
Confidence: 99%
“…This syntactic information accurately provided by parsers as dependency trees has been demonstrated highly useful for a huge variety of Natural Language Processing (NLP) tasks. In particular, dependency parsing has been recently used for machine translation (Zhang et al, 2019; Yang et al, 2020), opinion mining (Zhang et al, 2020a), relation and event extraction (Nguyen and Verspoor, 2019), question answering (Cao et al, 2021), sentiment classification (Bai et al, 2021), sentence classification, summarization (Balachandran et al, 2021) or semantic role labeling and named entity recognition (Sachan et al, 2021), among others.…”
Section: Introduction (mentioning)
Confidence: 99%