Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)
DOI: 10.18653/v1/2021.emnlp-main.633

Data-QuestEval: A Referenceless Metric for Data-to-Text Semantic Evaluation

Abstract: QuestEval is a reference-less metric for text-to-text tasks that compares generated summaries directly to the source text by automatically asking and answering questions. Its adaptation to Data-to-Text tasks is not straightforward, as it requires multimodal Question Generation and Answering systems for the considered tasks, which are seldom available. To this end, we propose a method for building synthetic multimodal corpora that enable training the multimodal components of a data-QuestEval metric. The resul…
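The abstract describes the QG/QA comparison only at a high level. As a rough illustration (not the authors' implementation), the following minimal Python sketch shows how such a referenceless check could work on a toy (attribute, value) table; the question generator, the answer extractor, and the token-level F1 matching are simplified stand-ins assumed here for illustration, in place of the trained multimodal QG/QA components the paper actually builds.

# Minimal sketch of a QG/QA-style referenceless check for data-to-text.
# NOT the authors' implementation: generate_questions and answer_on_text are
# toy stand-ins for the trained multimodal QG/QA models described in the paper.

def generate_questions(table):
    """Toy QG: one question per (attribute, value) cell of the table."""
    return [(f"What is the {attr}?", value) for attr, value in table]

def answer_on_text(question, expected_answer, text):
    """Toy QA: keep the expected-answer tokens that actually occur in the text.
    A real system would run a trained reading-comprehension model here."""
    text_tokens = set(text.lower().split())
    found = [tok for tok in expected_answer.lower().split() if tok in text_tokens]
    return " ".join(found)

def token_f1(predicted, gold):
    """Token-level F1 between a predicted answer and a gold answer."""
    pred_toks, gold_toks = predicted.lower().split(), gold.lower().split()
    common = set(pred_toks) & set(gold_toks)
    if not common:
        return 0.0
    precision = len(common) / len(pred_toks)
    recall = len(common) / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

def questeval_like_score(table, hypothesis):
    """Average answer overlap over questions generated from the table."""
    scores = [token_f1(answer_on_text(q, gold, hypothesis), gold)
              for q, gold in generate_questions(table)]
    return sum(scores) / len(scores) if scores else 0.0

table = [("name", "Blue Spice"), ("eatType", "coffee shop"), ("area", "riverside")]
hypothesis = "Blue Spice is a coffee shop located in the city centre."
print(round(questeval_like_score(table, hypothesis), 2))  # ~0.67

In this toy run the unsupported "riverside" attribute finds no answer in the hypothesis, which lowers the score, mirroring how QG/QA metrics penalize omitted or unverifiable content.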

Cited by 17 publications (30 citation statements) | References 26 publications
“…Similar to the faithfulness classifiers above, these aim to measure whether generated text contains the same information as a source or reference. Instantiations of these metrics may blank out entities (Eyal et al., 2019; Xie et al., 2021; Scialom et al., 2019), or fully generate questions (Chen et al., 2018; Rebuffel et al., 2021; Honovich et al., 2021; Deutsch et al., 2021a, inter alia).…”
Section: The Status Quo (mentioning)
confidence: 99%
“…Natural Language Processing: One of the sources of QG/QA methods is the thriving field of question generation in natural language processing and information retrieval [Jain et al., 2018]. Our approach is inspired by text-generation methods where QG and QA are used to measure the quality of a generated text without using a human reference [Rebuffel et al., 2021].…”
Section: Related Work (mentioning)
confidence: 99%
“…These two modules fulfil the role of the functions f and h defined in Section 3. EAGER is inspired by works like QuestEval and Data-QuestEval [Rebuffel et al., 2021], developed for natural language generation. For instance, for abstractive summarization, by generating questions from the original text (QG) and trying to answer them on the summary (QA), this method measures the quantity of information shared between both texts.…”
Section: EAGER (mentioning)
confidence: 99%
“…Then the matching score is calculated between the answer obtained from the document and the answer from the summary. QuestEval (Rebuffel et al., 2021) generates (question, answer) pairs not only from the summary but also from the source document, which allows it to measure recall.…”
Section: Setup (mentioning)
confidence: 99%
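The statement above notes that QuestEval-style metrics generate (question, answer) pairs from both the generated text and the source. The sketch below shows one way the two directions could be combined; the harmonic mean and the dummy overlap scorer are assumptions for illustration, not necessarily the exact formulation used by QuestEval or Data-QuestEval.

# Hedged sketch of combining the two QG/QA directions described above.
# directional_score(question_side, answer_side) can be any QG/QA scorer
# (for example, the toy scorer sketched after the abstract). The harmonic
# mean is one plausible combination, not necessarily the paper's exact one.

def combined_score(directional_score, source, hypothesis):
    # Precision-style: questions from the hypothesis, answered on the source
    # (penalizes generated content the data does not support).
    precision = directional_score(hypothesis, source)
    # Recall-style: questions from the source, answered on the hypothesis
    # (penalizes source content the generated text omits).
    recall = directional_score(source, hypothesis)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Dummy directional scorer based on plain token overlap, for demonstration only.
def overlap_scorer(question_side, answer_side):
    src = set(question_side.lower().split())
    ctx = set(answer_side.lower().split())
    return len(src & ctx) / len(src) if src else 0.0

print(round(combined_score(overlap_scorer,
                           "name : Blue Spice | area : riverside",
                           "Blue Spice is in the riverside area ."), 2))  # ~0.53

The precision-style direction flags generated content that the data does not support, while the recall-style direction flags source content that the generated text omits.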