Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.135
Semantically Driven Sentence Fusion: Modeling and Evaluation

Abstract: Sentence fusion is the task of joining related sentences into coherent text. Current training and evaluation schemes for this task are based on single reference ground-truths and do not account for valid fusion variants. We show that this hinders models from robustly capturing the semantic relationship between input sentences. To alleviate this, we present an approach in which ground-truth solutions are automatically expanded into multiple references via curated equivalence classes of connective phrases. We ap…
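The reference-expansion idea described in the abstract can be illustrated with a minimal sketch. The equivalence classes and the function below are hypothetical stand-ins, not the paper's curated classes:

```python
# Hypothetical equivalence classes of connective phrases (illustrative only;
# the paper's curated classes may differ in membership and granularity).
EQUIVALENCE_CLASSES = [
    {"however", "but", "yet"},
    {"because", "since", "as"},
]

def expand_references(fusion: str) -> set:
    """Expand one ground-truth fusion into multiple references by swapping
    its connective for every member of the connective's equivalence class."""
    references = {fusion}
    tokens = fusion.lower().split()
    for cls in EQUIVALENCE_CLASSES:
        for conn in cls:
            if conn in tokens:
                for alt in cls:
                    references.add(
                        " ".join(alt if t == conn else t for t in tokens)
                    )
    return references
```

Evaluating against such an expanded reference set credits a model for any valid connective choice rather than only the single annotated one.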

Cited by 3 publications (2 citation statements)
References 30 publications
“…Following Geva et al (2019), we report Exact match, which is the percentage of fusions predicted exactly correctly. In addition to the T5 baseline and the text-editing baselines LASERTAGGER, FELIX (Mallinson et al, 2020), and Seq2Edits, an autoregressive text-editing model (Stahlberg and Kumar, 2020), we also report state-of-the-art sequence-to-sequence models ROBERTASHARE (Rothe et al, 2020b), based on ROBERTA large, and AugBERT (Ben-David et al, 2020), based on BERT base.…”
Section: Sentence Fusion
confidence: 99%
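The Exact match metric quoted above can be sketched as a minimal corpus-level computation; a simple illustration, assuming whitespace-normalized string comparison (the function name is hypothetical):

```python
def exact_match(predictions, references):
    """Percentage of predictions that exactly equal their reference fusion,
    after trimming surrounding whitespace."""
    assert len(predictions) == len(references)
    matches = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return 100.0 * matches / len(predictions)
```

With multi-reference evaluation, the comparison would instead check membership of the prediction in the expanded reference set for each example.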
“…It is typically utilized as an intermediate step to improve downstream tasks. Ben-David et al (2020) train a model to learn both the discourse relation and the discourse connective jointly in a multi-task framework. In our work, similar to Rothe et al (2020), we fuse two sentences by training a model to learn the appropriate insertion of a discourse connective.…”
Section: Related Work
confidence: 99%