2022
DOI: 10.1109/access.2022.3170466

Adaptable Closed-Domain Question Answering Using Contextualized CNN-Attention Models and Question Expansion

Abstract: In closed-domain Question Answering (QA), the goal is to retrieve answers to questions within a specific domain. The main challenge of closed-domain QA is to develop a model that requires only small datasets for training, since large-scale corpora may not be available. One approach is a flexible QA model that can adapt to different closed domains and train on their corpora. In this paper, we present a novel versatile reading comprehension style approach for closed-domain QA (called CA-AcdQA). The approach is ba…
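The abstract is truncated before the method details, so the paper's exact question-expansion strategy is not shown here. Purely as an illustration, question expansion is often approximated by synonym substitution, e.g. via WordNet; the sketch below is an assumption for orientation, not the CA-AcdQA procedure.

```python
# Illustrative question-expansion sketch (an assumption, not the paper's
# CA-AcdQA method): generate question variants by swapping in WordNet synonyms.
# Requires: pip install nltk; then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn


def expand_question(question: str, max_synonyms: int = 2) -> list[str]:
    """Return the original question plus variants with one word replaced
    by a WordNet synonym."""
    tokens = question.lower().split()
    variants = [question]
    for i, tok in enumerate(tokens):
        synonyms = set()
        for synset in wn.synsets(tok):
            for lemma in synset.lemmas():
                name = lemma.name().replace("_", " ")
                if name != tok:
                    synonyms.add(name)
        for syn in list(synonyms)[:max_synonyms]:
            variants.append(" ".join(tokens[:i] + [syn] + tokens[i + 1:]))
    return variants


print(expand_question("what causes engine overheating"))
```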

Cited by 6 publications (2 citation statements) · References 50 publications
“…We used Stanford CoreNLP 79 and settings provided in Reference 76 for document analysis and candidate answer selection in the CAI module. We utilized the Answer Sentence Natural Questions (ASNQ) 80 dataset, derived from the Google Natural Questions (NQ) dataset 81, for training the CNN and multi‐head attention based answer selector component.…”
Section: Experiments and Results (mentioning, confidence: 99%)
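The statement above describes selecting candidate answer sentences before scoring them. A minimal sketch of that step is shown below; the cited work uses Stanford CoreNLP and the CAI module's linguistic functions, so the stanza tokenizer and the keyword-overlap heuristic here are stand-in assumptions, not the original pipeline.

```python
# Simplified candidate-answer-sentence selection (stand-in sketch: the cited
# work uses Stanford CoreNLP plus the CAI module's six linguistic/syntactic
# functions; a stanza sentence splitter and keyword overlap approximate it).
# Requires: pip install stanza; then stanza.download("en") once.
import stanza

nlp = stanza.Pipeline(lang="en", processors="tokenize")


def candidate_sentences(document: str, question: str, top_k: int = 5) -> list[str]:
    """Split the document into sentences and keep those sharing the most
    content words with the question."""
    q_words = {w.lower() for w in question.split() if len(w) > 3}
    sentences = [s.text for s in nlp(document).sentences]
    ranked = sorted(
        sentences,
        key=lambda s: len(q_words & {w.lower().strip(".,?") for w in s.split()}),
        reverse=True,
    )
    return ranked[:top_k]
```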
“…We utilized the CAI module introduced in Reference 76, which has six functions based on linguistic and syntactic features and patterns for reducing the document to sentences (candidate answer sentences) that could answer the given question. We designed a joint CNN and multi‐head attention neural network to analyze and assign a score to each candidate answer sentence based on its relevance to the question.…”
Section: Our Novel Question‐driven Hybrid Text Summarization Model (mentioning, confidence: 99%)
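The quoted description points to a joint CNN and multi-head attention network that scores each candidate sentence against the question. The sketch below shows one way such a scorer can be wired in PyTorch; the embedding size, pooling choice, and layer arrangement are illustrative assumptions, not the cited architecture's exact layout.

```python
# Minimal joint CNN + multi-head attention candidate scorer (a sketch under
# assumed dimensions, not the cited paper's exact architecture).
import torch
import torch.nn as nn


class CandidateScorer(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # 1-D convolution over the candidate sentence captures local n-gram features.
        self.conv = nn.Conv1d(emb_dim, emb_dim, kernel_size=3, padding=1)
        # Multi-head attention: the question attends over the candidate's features.
        self.attn = nn.MultiheadAttention(emb_dim, num_heads, batch_first=True)
        self.score = nn.Linear(emb_dim, 1)

    def forward(self, question_ids: torch.Tensor, candidate_ids: torch.Tensor) -> torch.Tensor:
        q = self.embed(question_ids)                                   # (B, Lq, D)
        c = self.embed(candidate_ids)                                  # (B, Lc, D)
        c = torch.relu(self.conv(c.transpose(1, 2))).transpose(1, 2)   # (B, Lc, D)
        attended, _ = self.attn(q, c, c)                               # question-conditioned view of the candidate
        pooled = attended.mean(dim=1)                                  # (B, D)
        return self.score(pooled).squeeze(-1)                          # one relevance score per pair


# Usage: a higher score means a more relevant candidate answer sentence.
model = CandidateScorer(vocab_size=30_000)
q = torch.randint(0, 30_000, (2, 12))   # 2 questions, 12 tokens each
c = torch.randint(0, 30_000, (2, 40))   # 2 candidate sentences, 40 tokens each
print(model(q, c).shape)                # torch.Size([2])
```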