For evaluating generation systems, automatic metrics such as BLEU cost nothing to run but have been shown to correlate poorly with human judgment, leading to systematic bias against certain model improvements. On the other hand, averaging human judgments, the unbiased gold standard, is often too expensive. In this paper, we use control variates to combine automatic metrics with human evaluation to obtain an unbiased estimator with lower cost than human evaluation alone. In practice, however, we obtain only a 7-13% cost reduction on evaluating summarization and open-response question answering systems. We then prove that our estimator is optimal: there is no unbiased estimator with lower cost. Our theory further highlights the two fundamental bottlenecks: the automatic metric and the prompt shown to human evaluators, both of which need to be improved to obtain greater cost savings.

arXiv:1807.02202v1 [cs.CL] 6 Jul 2018

(a) MS MARCO. Human annotators rated answer correctness (AnyCorrect); the automatic metric is ROUGE-L (higher is better). Each example gives the question, the reference answer, and the system answer (system; correctness / ROUGE-L).

System correct, ROUGE-L > 0.5 (19.6%, 285 of 1455 unique responses):
  Q. what is anti-mullerian hormone
  Reference: Anti-Mullerian Hormone (AMH) is a protein hormone produced by granulosa cells (cells lining the egg sacs or follicles) within the ovary.
  System: it is a protein hormone produced by granulosa cells (cells lining the egg sacs or follicles) within the ovary. (snet.ens; correct / 0.86)

System incorrect, ROUGE-L > 0.5 (1.3%, 19 of 1455 unique responses):
  Q. at what gestational age can you feel a fetus move
  Reference: 37 to 41 weeks (incorrect reference answer)
  System: 37 to 41 weeks (fastqa, fastqa.ext; incorrect / 1.0)

System correct, ROUGE-L < 0.5 (56.0%, 815 of 1455 unique responses):
  Q. what is the definition of onomatopoeia
  Reference: It is defined as a word, which imitates the natural sounds of a thing.
  System: the naming of a thing or action by a vocal imitation of the sound associated with it (as buzz, hiss). (fastqa; correct / 0.23)

System incorrect, ROUGE-L < 0.5 (23.1%, 336 of 1455 unique responses):
  Q. what kind root stem does a dandelion have
  Reference: Fibrous roots and hollow stem.
  System: vitamin a, vitamin c, vitamin d and vitamin b complex, as well as zinc, iron and potassium. (snet, snet.ens; incorrect / 0.09)

(b) Summarization. Each example gives the reference summary and the system summary (system; Edit / VecSim).

Edit < 0.3, VecSim > 0.5 (53.9%, 1078 of 2000 responses):
  Reference: Bhullar is set to sign a -day contract with the Kings. The -year-old will become the NBA's first player of Indian descent. Bhullar will be on the roster when the Kings host New Orleans Pelicans. Bhullar and
  System: The Kings are signing Bhullar to a -day contract. The -year-old will be on the roster on friday when David Wear's -season contract expires thursday. Bhullar is set to become the NBA's first player of Indian descent. (ml; 0.13 / 0.82)

Edit > 0.3, VecSim > 0.5 (18.0%, 360 of 2000 responses):
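The control-variates construction described in the abstract can be sketched in a few lines: annotate a small sample with (costly) human judgments, compute the (free) automatic metric on both the sample and the full dataset, and subtract off the metric's sampling error scaled by the optimal coefficient. The function and argument names below are illustrative, not the authors' code.

```python
import statistics

def control_variate_estimate(human, metric, metric_mean_all):
    """Unbiased estimate of the mean human judgment, using the
    automatic metric as a control variate.

    human:           human judgments on a small annotated sample (costly)
    metric:          automatic-metric scores on that same sample (free)
    metric_mean_all: metric mean over the FULL dataset (free to compute)
    """
    n = len(human)
    mean_h = sum(human) / n
    mean_g = sum(metric) / n
    # Optimal coefficient alpha = Cov(f, g) / Var(g); the estimator's
    # variance shrinks by a factor of (1 - rho^2), where rho = corr(f, g).
    cov = sum((h - mean_h) * (g - mean_g)
              for h, g in zip(human, metric)) / (n - 1)
    var_g = statistics.variance(metric)
    alpha = cov / var_g if var_g > 0 else 0.0
    # Unbiased: E[mean_g] = metric_mean_all, so the correction has mean zero.
    return mean_h - alpha * (mean_g - metric_mean_all)
```

With a perfectly correlated metric the estimate collapses onto the free full-data metric mean; with an uncorrelated (zero-variance) metric it falls back to the plain human average. That is exactly why the paper finds savings bottlenecked on the metric's correlation with human judgment.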
In statistical relational learning, one is concerned with inferring the most likely explanation (or world) that satisfies a given set of weighted constraints. The weight of a constraint signifies our confidence in the constraint, and the most likely world that explains a set of constraints is simply a satisfying assignment that maximizes the weights of satisfied constraints. The relational learning community has developed specialized solvers (e.g., Alchemy and Tuffy) for such weighted constraints independently of the work on SMT solvers in the verification community. In this paper, we show how to leverage SMT solvers to significantly improve the performance of relational solvers. Constraints associated with a weight of 1 (or 0) are called axioms because they must be satisfied (or violated) by the final assignment. Axioms can create difficulties for relational solvers. We isolate the burden of axioms to SMT solvers and only lazily pass information back to the relational solver. This information can either be a subset of the axioms, or even generalized axioms (similar to predicate generalization in verification). We implemented our algorithm in a tool called Soft-Cegar that outperforms state-of-the-art relational solvers Tuffy and Alchemy over four real-world applications. We hope this work opens the door for further collaboration between relational learning and SMT solvers.
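The lazy-axiom loop can be illustrated on a toy weighted-SAT problem. Everything here is a sketch under simplifying assumptions, not Soft-Cegar's implementation: the brute-force `best_soft_assignment` stands in for the relational solver, and a direct violation check stands in for the SMT solver that isolates the axioms.

```python
from itertools import product

def satisfies(assign, clause):
    # A clause is a list of (variable, wanted_value) literals; it is
    # satisfied when ANY literal holds under the assignment.
    return any(assign[v] == val for v, val in clause)

def best_soft_assignment(variables, soft, hard):
    """Exhaustively pick the assignment maximizing satisfied soft weight,
    subject to the hard clauses collected so far."""
    best, best_w = None, -1.0
    for bits in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, bits))
        if not all(satisfies(assign, c) for c in hard):
            continue
        w = sum(wt for clause, wt in soft if satisfies(assign, clause))
        if w > best_w:
            best, best_w = assign, w
    return best, best_w

def lazy_maxsat(variables, soft, axioms):
    """CEGAR-style loop: solve with no axioms, then lazily add only the
    axioms the candidate assignment violates, and re-solve."""
    active = []
    while True:
        assign, w = best_soft_assignment(variables, soft, active)
        if assign is None:
            raise ValueError("axioms are unsatisfiable")
        violated = [a for a in axioms if not satisfies(assign, a)]
        if not violated:
            return assign, w, len(active)
        active.extend(violated)
```

The payoff in the paper's setting is that the relational solver never sees axioms it would not have violated anyway; here that shows up as `active` containing only the lazily added clauses.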
Knowledge base population (KBP) systems take in a large document corpus and extract entities and their relations. Thus far, KBP evaluation has relied on judgements on the pooled predictions of existing systems. We show that this evaluation is problematic: when a new system predicts a previously unseen relation, it is penalized even if it is correct. This leads to significant bias against new systems, which counterproductively discourages innovation in the field. Our first contribution is a new importance-sampling-based evaluation which corrects for this bias by annotating a new system's predictions on-demand via crowdsourcing. We show this eliminates bias and reduces variance using data from the 2015 TAC KBP task. Our second contribution is an implementation of our method made publicly available as an online KBP evaluation service. We pilot the service by testing diverse state-of-the-art systems on the TAC KBP 2016 corpus and obtain accurate scores in a cost-effective manner.
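The bias correction rests on ordinary importance sampling: draw the predictions to annotate from a proposal distribution q, then reweight each label by p(x)/q(x) so the estimate of the new system's score stays unbiased even though the pooled labels never covered those predictions. A minimal sketch (the function name and the toy distributions are assumptions, not the paper's system):

```python
def importance_estimate(samples, score, p, q):
    """Unbiased estimate of E_p[score] from samples drawn from proposal q,
    via the importance weights p(x) / q(x)."""
    return sum(score(x) * p(x) / q(x) for x in samples) / len(samples)

# Toy setup: four predictions from a new system. In the real service,
# `correct` would come from on-demand crowdsourced annotation.
correct = {0: 1.0, 1: 0.0, 2: 1.0, 3: 1.0}
p = lambda x: 0.25                       # target: uniform over the system's predictions
q = lambda x: [0.4, 0.1, 0.4, 0.1][x]    # proposal: oversample previously unseen predictions
```

When q equals p the estimator reduces to a plain average; a skewed q changes which predictions get annotated (and the estimator's variance) but not its expectation.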
To understand a sentence like "whereas only 10% of White Americans live at or below the poverty line, 28% of African Americans do" it is important not only to identify individual facts, e.g., poverty rates of distinct demographic groups, but also the higher-order relations between them, e.g., the disparity between them. In this paper, we propose the task of Textual Analogy Parsing (TAP) to model this higher-order meaning. The output of TAP is a frame-style meaning representation which explicitly specifies what is shared (e.g., poverty rates) and what is compared (e.g., White Americans vs. African Americans, 10% vs. 28%) between its component facts. Such a meaning representation can enable new applications that rely on discourse understanding such as automated chart generation from quantitative text. We present a new dataset for TAP, baselines, and a model that successfully uses an ILP to enforce the structural constraints of the problem.
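The frame-style meaning representation described above can be pictured as a small data structure holding one shared component plus the parallel slots on which the facts differ. The field names here are an illustrative guess at the shape, not the paper's exact schema.

```python
from dataclasses import dataclass

@dataclass
class AnalogyFrame:
    """A TAP-style frame: what the component facts share, plus the
    parallel slots being compared (illustrative schema)."""
    shared: dict
    compared: list

# The poverty-rate sentence from the running example:
frame = AnalogyFrame(
    shared={"quantity": "poverty rate", "unit": "%"},
    compared=[
        {"whole": "White Americans", "value": 10},
        {"whole": "African Americans", "value": 28},
    ],
)

# A downstream application such as chart generation can read off the
# higher-order relation (the disparity) directly from the frame:
disparity = frame.compared[1]["value"] - frame.compared[0]["value"]
```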
How much is 131 million US dollars? To help readers put such numbers in context, we propose a new task of automatically generating short descriptions known as perspectives, e.g. "$131 million is about the cost to employ everyone in Texas over a lunch period". First, we collect a dataset of numeric mentions in news articles, where each mention is labeled with a set of rated perspectives. We then propose a system to generate these descriptions in two steps: formula construction and description generation. In construction, we compose formulas from numeric facts in a knowledge base and rank the resulting formulas based on familiarity, numeric proximity, and semantic compatibility. In generation, we convert a formula into natural language using a sequence-to-sequence recurrent neural network. Our system obtains a 15.2% F1 improvement over a non-compositional baseline at formula construction and a 12.5 BLEU-point improvement over a baseline at description generation.
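The construction step's ranking can be sketched with two of the three signals named above, numeric proximity and familiarity. The scoring functions and the candidate facts below are illustrative assumptions, not the paper's learned ranker.

```python
import math

def proximity(target, value):
    # Ratio-based closeness in log space: 1.0 for an exact match,
    # decaying as the candidate drifts from the target's magnitude
    # (being 2x off matters the same at millions as at billions).
    return math.exp(-abs(math.log(value / target)))

def rank_formulas(target, candidates):
    """candidates: (description, value, familiarity in [0, 1]) triples.
    Score each candidate formula and return descriptions, best first."""
    scored = sorted(
        ((proximity(target, value) * familiarity, desc)
         for desc, value, familiarity in candidates),
        reverse=True,
    )
    return [desc for _, desc in scored]
```

A familiar fact that is numerically close beats both an unfamiliar exact match and a familiar fact of the wrong magnitude, which is the trade-off the ranking has to balance.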