Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2022.emnlp-main.464

R2D2: Robust Data-to-Text with Replacement Detection

Cited by 4 publications (14 citation statements); references 0 publications.
“…Similar to pretraining in other domains, table-to-text pre-training endeavours to create a generalizable model and jointly learn an enhanced representation of both tabular and textual data. This process usually hinges on the aggregation or synthesis of substantial amounts of data, coupled with the formulation of appropriate pretext tasks and objectives (Nan et al., 2022; Zhao et al., 2022). Nonetheless, table pre-training confronts discernible constraints, such as high consumption of computing resources and relatively poor generalizability.…”
Section: Logic Table-to-Text Generation (mentioning)
confidence: 99%
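
For readers unfamiliar with such pretext tasks, a minimal sketch follows. It assumes a T5-style setup in which the table is linearized into a string and a single cell is masked with a sentinel token for the model to reconstruct. The linearization format, the `mask_one_cell` helper, and the sentinel token are illustrative assumptions, not taken from any of the cited papers.

```python
# Hypothetical illustration of a table-to-text pretext task:
# linearize a table, then mask one cell value for the model to
# recover (a T5-style span-corruption objective). The format and
# helper names are illustrative assumptions.
import random

def linearize(table):
    """Flatten a {header: [cells]} table into a token-friendly string."""
    cols = []
    for header, cells in table.items():
        cols.append(f"col: {header} | " + " | ".join(cells))
    return " || ".join(cols)

def mask_one_cell(table, sentinel="<extra_id_0>"):
    """Replace one random cell with a sentinel; return (input, target)."""
    header = random.choice(list(table))
    i = random.randrange(len(table[header]))
    target = table[header][i]
    corrupted = {h: list(c) for h, c in table.items()}
    corrupted[header][i] = sentinel
    return linearize(corrupted), f"{sentinel} {target}"

table = {"Team": ["Lakers", "Celtics"], "Wins": ["52", "57"]}
src, tgt = mask_one_cell(table)
# e.g. src: "col: Team | Lakers | <extra_id_0> || col: Wins | 52 | 57"
#      tgt: "<extra_id_0> Celtics"
```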
“…LoFT (Zhao et al., 2023), based on BART-large (Lewis et al., 2020), utilizes logic forms both as fact validators and content planners to control the generative process. R2D2 (Nan et al., 2022) trains T5-base (Raffel et al., 2020) to function both as a generator and a faithfulness discriminator, incorporating supplementary replacement detection and unlikelihood learning tasks.…”
Section: Baselines (mentioning)
confidence: 99%
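
The two auxiliary objectives attributed to R2D2 in the statement above can be sketched concretely. The following is a hypothetical PyTorch sketch, assuming a token-level mask marking replaced tokens and a standard unlikelihood formulation that penalizes probability mass on corrupted tokens; the helper names (`replacement_detection_loss`, `detector_head`) are illustrative and this is not the authors' implementation.

```python
# Hypothetical sketch of the two auxiliary objectives the citation
# attributes to R2D2: token-level replacement detection and
# unlikelihood learning. Names and shapes are illustrative
# assumptions, not the authors' actual code.
import torch
import torch.nn.functional as F

def replacement_detection_loss(token_states, replaced_mask, detector_head):
    """Binary cross-entropy over tokens: was each token replaced?

    token_states:  (batch, seq_len, hidden) decoder hidden states
    replaced_mask: (batch, seq_len) float, 1.0 where a token was corrupted
    detector_head: nn.Linear(hidden, 1) classifier on top of the LM
    """
    logits = detector_head(token_states).squeeze(-1)  # (batch, seq_len)
    return F.binary_cross_entropy_with_logits(logits, replaced_mask)

def unlikelihood_loss(lm_logits, negative_tokens):
    """Push probability mass away from unfaithful (replaced) tokens.

    lm_logits:       (batch, seq_len, vocab) language-model logits
    negative_tokens: (batch, seq_len) long ids of tokens to penalize
    """
    probs = lm_logits.softmax(dim=-1)
    p_neg = probs.gather(-1, negative_tokens.unsqueeze(-1)).squeeze(-1)
    # -log(1 - p) grows as the model assigns mass to the bad token
    return -torch.log1p(-p_neg.clamp(max=1 - 1e-6)).mean()
```

In this formulation the detection head shares the decoder's hidden states with the generator, so one model serves as both generator and faithfulness discriminator, matching the dual role the citation describes.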
“…These techniques can be incorporated into a broad range of applications, including but not limited to game strategy development, financial analysis, and human resources management. However, existing fine-tuned table-to-text generation models (Nan et al., 2022a; Liu et al., 2022b,a; Zhao et al., 2023b) are typically task-specific, limiting their adaptability to real-world applications.…”
Section: Introduction (mentioning)
confidence: 99%