Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1488
Interpretable Question Answering on Knowledge Bases and Text

Abstract: Interpretability of machine learning (ML) models becomes more relevant with their increasing adoption. In this work, we address the interpretability of ML-based question answering (QA) models on a combination of knowledge bases (KB) and text documents. We adapt post hoc explanation methods such as LIME and input perturbation (IP) and compare them with the self-explanatory attention mechanism of the model. For this purpose, we propose an automatic evaluation paradigm for explanation methods in the context of QA…
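The input perturbation (IP) idea mentioned in the abstract can be illustrated with a minimal sketch: delete each input token in turn and attribute importance by how much the model's answer confidence drops. The `score_answer` function below is a hypothetical toy stand-in, not the paper's actual model.

```python
def score_answer(tokens):
    # Toy scorer standing in for a real QA model: confidence grows with
    # the fraction of key evidence tokens present in the input.
    evidence = {"Einstein", "born", "Ulm"}
    return sum(1.0 for t in tokens if t in evidence) / len(evidence)

def input_perturbation(tokens):
    # Importance of a token = drop in answer score when it is removed.
    base = score_answer(tokens)
    importance = {}
    for i, tok in enumerate(tokens):
        perturbed = tokens[:i] + tokens[i + 1:]
        importance[tok] = base - score_answer(perturbed)
    return importance

tokens = ["Einstein", "was", "born", "in", "Ulm", "in", "1879"]
scores = input_perturbation(tokens)
# Tokens whose removal hurts the score most form the explanation.
```

In this toy setup, removing an evidence token such as "Einstein" lowers the score, while removing a filler word such as "was" leaves it unchanged, so the explanation highlights the evidence tokens.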

Cited by 25 publications (16 citation statements)
References 12 publications (12 reference statements)
“…In unsupervised approaches, many QA systems have relied on structured knowledge base (KB) QA. For example, several previous works have used ConceptNet (Speer et al, 2017) to keep the QA process interpretable (Khashabi et al, 2018b;Sydorova et al, 2019). However, the construction of such structured knowledge bases is expensive, and may need frequent updates.…”
Section: Related Work
confidence: 99%
“…The model proposed by Xiong et al (2019) contains a graph-attention based KB reader and a knowledge-aware text reader. Other work focuses on retrieving a small graph containing just the question-related information (Sun et al, 2019) and on the interpretability of QA over KBs and text (Sydorova et al, 2019). These methods, however, do not consider the high-order relationships among the entities contained in the text.…”
Section: Related Work
confidence: 99%
“…Interpretability: Interpretability has recently become an important line of research (Jiang et al, 2019; Sydorova et al, 2019; Asai et al, 2020). The nearest neighbor approach (Simard et al, 1993) is appealing in that we can explicitly know which training example triggers each prediction.…”
Section: Discussion and Related Work
confidence: 99%