2021
DOI: 10.3390/app12010111

You Don’t Need Labeled Data for Open-Book Question Answering

Abstract: Open-book question answering is a subset of question answering (QA) tasks where the system aims to find answers in a given set of documents (open-book) and common knowledge about a topic. This article proposes a solution for answering natural language questions from a corpus of Amazon Web Services (AWS) technical documents with no domain-specific labeled data (zero-shot). These questions have a yes–no–none answer and a text answer which can be short (a few words) or long (a few sentences). We present a two-ste…
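The abstract describes a zero-shot, two-stage setup: retrieve relevant passages from the document corpus, then extract an answer span, with no labeled in-domain data. A minimal sketch of that retrieve-then-read pattern is below. Everything in it is illustrative, not the paper's method: the toy AWS-style corpus, the term-frequency cosine retriever, and the sentence-overlap "reader" are hypothetical stand-ins for the pretrained retriever and reader models such a system would actually use.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split into alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(question, docs, k=1):
    """Stage 1: rank documents by term-frequency cosine similarity."""
    q = Counter(tokenize(question))
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(tokenize(d))),
                    reverse=True)
    return ranked[:k]

def extract_answer(question, doc):
    """Stage 2: return the sentence with the most question-term overlap."""
    q_terms = set(tokenize(question))
    sentences = re.split(r"(?<=[.!?])\s+", doc)
    return max(sentences, key=lambda s: len(q_terms & set(tokenize(s))))

# Hypothetical two-document corpus standing in for AWS technical docs.
docs = [
    "Amazon S3 stores objects in buckets. A bucket name must be globally unique.",
    "EC2 instances are virtual servers. You can stop and start them at any time.",
]
question = "Do S3 bucket names need to be unique?"
best = retrieve(question, docs)[0]
print(extract_answer(question, best))  # prints "A bucket name must be globally unique."
```

In a real zero-shot system both stages would be pretrained neural models (a dense retriever and a reading-comprehension model) applied without fine-tuning on the target domain; the point of the sketch is only the two-stage control flow.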

Cited by 13 publications (4 citation statements). References 30 publications.
“…Our system's precision, recall, and F1-Score are 82.8%, 87%, and 84.8%, respectively, which surpass the precision of 62%, recall of 87%, and F1-Score of 67% reported in other research [32]. The proposed QA system's effectiveness is affirmed by the fact that it surpasses the recall result of other research with 42.70% [33] and outperforms other research [31], [34], [35] in terms of F1-Score, which is 42.6% [31], 49% [34], and 70.8% [35]. This positions it as a leading solution for automatic document processing and information retrieval tasks across a wide range of domains.…”
Section: G. Discussion (mentioning)
confidence: 45%
“…To examine long-term QA-matching technology based on deep learning for psychological counseling, Chen and Xu [39] improved the matching effect by developing a deep structured semantic model (DSSM) using a bidirectional gate recurrent unit (BiGRU) and a double attention layer. Gholami and Noori [40] presented a new solution for zero-shot open-book QA. Noraset et al [41] developed QA systems using the Bi-LSTM model for Thai users.…”
Section: General Question Answering (mentioning)
confidence: 99%
“…Most of these techniques start by extracting the features using machine learning and optimization algorithms, followed by training classification models, like a one-class classifier. However, these techniques were not developed and customized for the entire issue, and such feature maps are not optimal [11][12][13].…”
Section: Introduction (mentioning)
confidence: 99%