Abstract: Evidence retrieval is a critical stage of question answering (QA), necessary not only to improve performance, but also to explain the decisions of the corresponding QA method. We introduce a simple, fast, and unsupervised iterative evidence retrieval method, which relies on three ideas: (a) an unsupervised alignment approach to soft-align questions and answers with justification sentences using only GloVe embeddings, (b) an iterative process that reformulates queries focusing on terms that are not covered by e…
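The alignment-and-reformulation loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tiny 3-d vectors are hypothetical stand-ins for pre-trained GloVe embeddings, and the example terms, sentences, and the 0.8 coverage threshold are illustrative assumptions.

```python
# Hypothetical 3-d vectors standing in for pre-trained GloVe embeddings.
EMB = {
    "cells":        (1.0, 0.1, 0.0),
    "mitochondria": (0.9, 0.2, 0.1),
    "energy":       (0.0, 1.0, 0.1),
    "atp":          (0.1, 0.9, 0.2),
    "produce":      (0.2, 0.1, 1.0),
    "make":         (0.3, 0.2, 0.9),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

def align_score(term, sentence):
    # Soft-align a query term to its most similar token in the sentence.
    return max(cosine(EMB[term], EMB[tok]) for tok in sentence)

def iterative_retrieve(query_terms, sentences, threshold=0.8, max_iters=3):
    remaining, chosen = set(query_terms), []
    for _ in range(max_iters):
        if not remaining:
            break
        candidates = [i for i in range(len(sentences)) if i not in chosen]
        # Retrieve the sentence that best covers the still-uncovered terms.
        best = max(candidates,
                   key=lambda i: sum(align_score(t, sentences[i])
                                     for t in remaining))
        chosen.append(best)
        # Reformulate the query: keep only terms the evidence so far
        # covers poorly, so the next iteration focuses on them.
        remaining = {t for t in remaining
                     if align_score(t, sentences[best]) < threshold}
    return chosen
```

For example, querying with `["cells", "energy", "produce"]` over the sentences `[["mitochondria", "make"], ["atp"]]` first retrieves the sentence that softly covers "cells" and "produce", then reformulates the query around the uncovered term "energy" and retrieves the "atp" sentence.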
“…As noted by previous works (Yadav et al., 2019, 2020), answer selection performance can decrease as the number of used facts k increases for Transformers, underlining that knowledge aggregation in the presence of increasing distracting knowledge remains an open research problem. We evaluate how our approach compares with transformer-based approaches in this aspect, as presented in Figure 3.…”
Section: Knowledge Aggregation With Increasing Distractors
Constrained optimization solvers based on Integer Linear Programming (ILP) were the cornerstone of explainable natural language inference at its inception. ILP-based approaches provide a way to encode explicit and controllable assumptions, casting natural language inference as an abductive reasoning problem in which the solver constructs a plausible explanation for a given hypothesis. While constraint-based solvers provide explanations, they are limited by the use of explicit constraints and cannot be integrated into broader deep neural architectures. In contrast, state-of-the-art transformer-based models can learn from data and implicitly encode complex constraints; however, these models are intrinsically black boxes. This paper presents a novel framework named ∂-Explainer (Diff-Explainer) that combines the best of both worlds by casting constrained optimization as part of a deep neural network via differentiable convex optimization, and by fine-tuning pre-trained transformers for downstream explainable NLP tasks. To demonstrate the efficacy of the framework, we transform the constraints presented by TupleILP and integrate them with sentence-embedding transformers for the task of explainable science QA. Our experiments show up to ≈10% improvement over the non-differentiable solver while still providing explanations that support its inference.
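To make the abductive-reasoning framing concrete, the toy sketch below selects an explanation as a small constrained optimization: choose at most k facts that maximize lexical overlap with the hypothesis, subject to a TupleILP-style chaining constraint that every chosen fact must connect to the hypothesis or to another chosen fact. The brute-force search, the overlap objective, and all example sentences are illustrative assumptions; real ILP systems use a dedicated solver and far richer scoring.

```python
from itertools import combinations

def overlap(a, b):
    # Lexical overlap as a stand-in for a learned relevance score.
    return len(set(a) & set(b))

def select_explanation(hypothesis, facts, k=2):
    """Brute-force the 0/1 selection a small ILP would solve exactly."""
    best, best_score = (), -1
    for r in range(1, k + 1):
        for combo in combinations(range(len(facts)), r):
            # Chaining constraint: each chosen fact must share a term
            # with the hypothesis or with another chosen fact.
            ok = all(
                overlap(facts[i], hypothesis) > 0
                or any(overlap(facts[i], facts[j]) > 0
                       for j in combo if j != i)
                for i in combo)
            if not ok:
                continue
            # Objective: total overlap of chosen facts with hypothesis.
            score = sum(overlap(facts[i], hypothesis) for i in combo)
            if score > best_score:
                best, best_score = combo, score
    return list(best)
```

For instance, for the hypothesis `["plants", "need", "sunlight", "to", "grow"]` and facts about plants, sunlight, and an irrelevant distractor about rocks, the solver returns the two connected, relevant facts and excludes the distractor, which violates the chaining constraint.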
“…Pre-trained embeddings with heuristics: Pre-trained embeddings have the advantage of capturing semantic similarity, going beyond the lexical-overlap limitation imposed by the use of weighting schemes. This property has been shown to be useful for multi-hop and abstractive tasks, where approaches based on pre-trained word embeddings, such as GloVe (Pennington et al., 2014), have been adopted to perform semantic alignment between question, answer, and justification sentences (Yadav et al., 2020). Silva et al. (2018) employ word embeddings and semantic similarity scores to perform selective reasoning on commonsense knowledge graphs and construct explanations for textual entailment.…”
Section: Explicit Models
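The advantage over lexical overlap mentioned in the snippet above can be shown in a few lines. The 3-d vectors are hypothetical stand-ins for pre-trained GloVe embeddings, chosen so that the synonyms sit close together; real GloVe vectors are 50- to 300-dimensional.

```python
# Hypothetical stand-ins for pre-trained GloVe embeddings.
EMB = {
    "produce": (0.2, 0.1, 1.0),
    "make":    (0.3, 0.2, 0.9),
    "rock":    (1.0, 0.0, 0.1),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

# Lexical overlap scores the synonym pair at zero...
assert len({"produce"} & {"make"}) == 0
# ...while embedding similarity ranks it far above an unrelated word.
assert cosine(EMB["produce"], EMB["make"]) > cosine(EMB["produce"], EMB["rock"])
```

This is why embedding-based alignment can link a question asking what cells "produce" to a justification sentence that says mitochondria "make" ATP, where a weighting scheme over exact terms would find no match.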
“…Banerjee and Baral (2020) propose a semantic ranking model based on BERT for QASC (Khot et al., 2020) and OpenBookQA. Transformers have shown improved performance on downstream answer prediction tasks when applied in combination with explanations constructed through explicit models (Yadav et al., 2019; Yadav et al., 2020; Valentino et al., 2020a).…”
This paper presents a systematic review of benchmarks and approaches for explainability in Machine Reading Comprehension (MRC). We present how the representation and inference challenges evolved and the steps taken to tackle them. We also present the evaluation methodologies used to assess the performance of explainable systems. In addition, we identify persisting open research questions and highlight critical directions for future work.