2021
DOI: 10.1609/aaai.v35i18.17988

A Semantic Parsing and Reasoning-Based Approach to Knowledge Base Question Answering

Abstract: Knowledge Base Question Answering (KBQA) is a task where existing techniques have faced significant challenges, such as the need for complex question understanding, reasoning, and large training datasets. In this work, we demonstrate Deep Thinking Question Answering (DTQA), a semantic parsing and reasoning-based KBQA system. DTQA (1) integrates multiple, reusable modules that are trained specifically for their individual tasks (e.g., semantic parsing, entity linking, and relationship linking), eliminating the n…
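The abstract describes DTQA as a pipeline of independently trained, reusable modules (semantic parsing, entity linking, relationship linking). A minimal sketch of such a modular KBQA pipeline is shown below; every function name, signature, and URI here is an illustrative assumption, not DTQA's actual code or API.

```python
# Hypothetical sketch of a modular KBQA pipeline in the style the abstract
# describes: each stage is a separately trained, replaceable module.
# All names, signatures, and URIs are illustrative stubs, not DTQA's code.

def semantic_parse(question: str) -> dict:
    """Map a natural-language question to a logical form (stubbed)."""
    return {"predicate": "author", "target": "Dracula"}

def link_entity(mention: str) -> str:
    """Resolve a surface mention to a KB entity URI (stubbed)."""
    return "http://dbpedia.org/resource/" + mention.replace(" ", "_")

def link_relation(predicate: str) -> str:
    """Resolve a predicate word to a KB relation URI (stubbed)."""
    return "http://dbpedia.org/ontology/" + predicate

def answer(question: str) -> str:
    """Compose the modules: parse, link entities/relations, emit SPARQL."""
    lf = semantic_parse(question)
    entity = link_entity(lf["target"])
    relation = link_relation(lf["predicate"])
    return f"SELECT ?x WHERE {{ <{entity}> <{relation}> ?x }}"

print(answer("Who wrote Dracula?"))
```

The point of the modular design, per the abstract, is that each stub above could be swapped for a stronger model trained on its own task without retraining the rest of the pipeline.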

Cited by 11 publications (3 citation statements)
References 12 publications
“…Suppose the SPARQL of the question has only one set of RDF triples. In that case, the accuracy rate is 73.83%, i.e., the number of questions correctly predicted. Table 9 presents the End-to-End performance of the proposed system on the LC-QuAD test set and compares it with two other QA systems, i.e., QAMP [30] and DTQA [31], respectively.…”
Section: End-to-End Performance (mentioning)
confidence: 99%
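The quoted statement distinguishes questions whose SPARQL contains a single set of RDF triples from multi-triple questions. As a rough illustration of that distinction (the naive counting heuristic below is ours, not the cited paper's method, and is not a real SPARQL parser):

```python
import re

def count_triple_patterns(sparql: str) -> int:
    """Naively count triple patterns in a SPARQL WHERE clause by splitting
    the braced body on ' .' separators. Illustrative heuristic only."""
    match = re.search(r"\{(.*)\}", sparql, re.DOTALL)
    if not match:
        return 0
    body = match.group(1)
    # Each non-empty '.'-separated segment counts as one triple pattern.
    return len([seg for seg in body.split(" .") if seg.strip()])

one_triple = "SELECT ?x WHERE { dbr:Dracula dbo:author ?x }"
two_triples = "SELECT ?x WHERE { dbr:Dracula dbo:author ?x . ?x dbo:birthPlace ?p }"
print(count_triple_patterns(one_triple))   # 1
print(count_triple_patterns(two_triples))  # 2
```

Questions in the first category (one triple pattern) are the ones for which the cited system reports 73.83% accuracy.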
“…Table 7 presents the End-to-End performance of the proposed system on the LC-QuAD test set and compares it with two other QA systems, i.e., QAMP [33] and DTQA [34], respectively. Table 9 summarizes the number of questions in the SPARQL syntax of 604 questions in one, two, and three RDF triples, and the number of questions correctly predicted using the Ensemble BR model.…”
Section: End-to-End Performance (mentioning)
confidence: 99%
“…The QALD metrics (Precision, Recall, F-measure) are used to measure the performance on QALD-7, QALD-8, and QALD-9, as shown in Table 8. The proposed system was compared to other systems, i.e., gAnswer2 [35], Wdaqua [11], QAwizard [14], Light-QAwizard [15], and DTQA [34]. gAnswer2 and Wdaqua have won the QALD competition in the past.…”
Section: End-to-End Performance (mentioning)
confidence: 99%