Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
DOI: 10.18653/v1/2021.findings-acl.339
Leveraging Abstract Meaning Representation for Knowledge Base Question Answering

Abstract: Knowledge base question answering (KBQA) is an important task in Natural Language Processing. Existing approaches face significant challenges, including complex question understanding, the need for reasoning, and the lack of large end-to-end training datasets. In this work, we propose Neuro-Symbolic Question Answering (NSQA), a modular KBQA system that leverages (1) Abstract Meaning Representation (AMR) parses for task-independent question understanding; (2) a simple yet effective graph transformation approach to…
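To make the abstract's pipeline concrete, here is a minimal, purely illustrative sketch of the kind of graph transformation it describes: an AMR parse of a question (represented as role triples, with `amr-unknown` marking the wh-element, per AMR conventions) is rewritten into a KB query. The frame-to-relation mapping, variable names, and the `amr_to_sparql` helper are all hypothetical, not NSQA's actual code.

```python
# Hedged sketch, not NSQA's implementation: names and mappings are illustrative.
# AMR for "Who directed Titanic?" as (source, role, target) triples;
# amr-unknown marks the questioned element, per AMR conventions.
amr_triples = [
    ("d", ":instance", "direct-01"),
    ("d", ":ARG0", "a"),
    ("a", ":instance", "amr-unknown"),
    ("d", ":ARG1", "f"),
    ("f", ":instance", "film"),
    ("f", ":name", "Titanic"),
]

# Hypothetical lexicon mapping AMR predicate frames to KB relations.
FRAME_TO_RELATION = {"direct-01": "dbo:director"}

def amr_to_sparql(triples):
    """Toy transformation: find a mappable predicate frame, its unknown
    argument, and its named-entity argument; emit a one-triple SPARQL query."""
    instances = {s: t for s, r, t in triples if r == ":instance"}
    names = {s: t for s, r, t in triples if r == ":name"}
    # Event variable whose frame has a known KB relation.
    event = next(v for v, c in instances.items() if c in FRAME_TO_RELATION)
    relation = FRAME_TO_RELATION[instances[event]]
    args = [t for s, r, t in triples if s == event and r.startswith(":ARG")]
    unknown = next(v for v in args if instances[v] == "amr-unknown")
    entity = next(names[v] for v in args if v in names)
    return f"SELECT ?{unknown} WHERE {{ dbr:{entity} {relation} ?{unknown} }}"

print(amr_to_sparql(amr_triples))
# Prints: SELECT ?a WHERE { dbr:Titanic dbo:director ?a }
```

The point of the modular design the abstract advertises is visible even in this toy: the AMR parse is task-independent, and only the small frame-to-relation mapping is KB-specific.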

Cited by 57 publications (45 citation statements) · References 32 publications
“…On the one hand, the usefulness of AMR as an intermediate representation has been demonstrated for, e.g., inference in question answering (Sachan and Xing, 2016; Kapanipathi et al., 2021) and human-robot interaction (Bastianelli et al., 2013). So far, inference over AMRs has necessarily remained limited to sentence-level information, or has been based on ad-hoc meaning representations spanning multiple sentence-level AMRs.…”
Section: Goals of UMR Annotation
Confidence: 99%
“…We also include the QALD-9-AMR corpus, containing human-annotated gold AMRs in AMR 3.0 style for the QALD-9 corpus. QALD-9 (Usbeck et al., 2018) is a corpus of natural language questions for executable semantic parsing, used in (Kapanipathi et al., 2021), among other works. BioAMR is distinct from AMR 2.0 treebank data mostly in terms of vocabulary and named entities.…”
Section: Domain Adaptation
Confidence: 99%
“…However, the richness of the information included in AMR graphs, as well as their obvious applications as an interface between humans and machines, make both AMR parsing and generation very rewarding problems to solve. In fact, AMR has been successfully applied to diverse downstream applications, such as Machine Translation (Song et al., 2019), Text Summarization (Hardy and Vlachos, 2018; Liao et al., 2018), Human-Robot Interaction (Bonial et al., 2020a), Information Extraction (Rao et al., 2017) and, more recently, Question Answering (Lim et al., 2020; Bonial et al., 2020b; Kapanipathi et al., 2021). However, since AMR graphs for such applications are obtained automatically through an AMR parser, the benefits of AMR integration are highly correlated with the performance of the underlying parser across various data distributions and domains.…”
Section: Introduction
Confidence: 99%