Abstract: In closed-domain Question Answering (QA), the goal is to retrieve answers to questions within a specific domain. The main challenge in closed-domain QA is developing a model that can be trained on small datasets, since large-scale corpora may not be available. One approach is a flexible QA model that can adapt to different closed domains and be trained on their corpora. In this paper, we present a novel, versatile reading-comprehension-style approach for closed-domain QA (called CA-AcdQA). The approach is ba…
“…We used Stanford CoreNLP 79 and the settings provided in Reference 76 for document analysis and candidate answer selection in the CAI module. We utilized the Answer Sentence Natural Questions (ASNQ) dataset 80, derived from the Google Natural Questions (NQ) dataset 81, to train the CNN and multi‐head attention based answer selector component.…”
Section: Experiments and Results
“…We utilized the CAI module introduced in Reference 76, which has six functions, based on linguistic and syntactic features and patterns, for reducing the document to sentences (candidate answer sentences) that could answer the given question. We designed a joint CNN and multi‐head attention neural network that analyzes each candidate answer sentence and assigns it a score based on its relevance to the question.…”
Section: Our Novel Question‐driven Hybrid Text Summarization Model
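The snippet names a joint CNN and multi‐head attention scorer but does not spell out its architecture, so the following NumPy sketch only illustrates the general shape of such a component: convolve the token embeddings of the question and of a candidate sentence, let the question attend over the sentence with multi‐head scaled dot‐product attention, pool, and score by cosine similarity. All function names, dimensions, and the cosine pooling are illustrative assumptions, not the authors' design.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conv1d(x, kernel):
    """ReLU 1-D convolution over a token sequence. x: (T, d), kernel: (k, d, f)."""
    k = kernel.shape[0]
    out = np.stack([np.tensordot(x[t:t + k], kernel, axes=([0, 1], [0, 1]))
                    for t in range(x.shape[0] - k + 1)])
    return np.maximum(out, 0.0)

def multi_head_attention(q, k, v, n_heads):
    """Scaled dot-product attention computed independently per head, then concatenated."""
    dh = q.shape[-1] // n_heads
    outs = []
    for h in range(n_heads):
        qs, ks, vs = (m[:, h * dh:(h + 1) * dh] for m in (q, k, v))
        outs.append(softmax(qs @ ks.T / np.sqrt(dh)) @ vs)
    return np.concatenate(outs, axis=-1)

def score_candidate(question_emb, sentence_emb, kernel, n_heads=4):
    """Convolve both sequences, attend question -> sentence, pool, cosine score."""
    q_feat = conv1d(question_emb, kernel)
    s_feat = conv1d(sentence_emb, kernel)
    attended = multi_head_attention(q_feat, s_feat, s_feat, n_heads)
    q_vec, s_vec = attended.mean(axis=0), s_feat.mean(axis=0)
    return float(q_vec @ s_vec /
                 (np.linalg.norm(q_vec) * np.linalg.norm(s_vec) + 1e-9))

# Toy run: 5 question tokens, 8 sentence tokens, embedding dim 16,
# a width-3 convolution with 16 output filters.
kernel = rng.normal(size=(3, 16, 16)) * 0.1
q_emb = rng.normal(size=(5, 16))
s_emb = rng.normal(size=(8, 16))
print(score_candidate(q_emb, s_emb, kernel))
```

In the actual system, each candidate answer sentence from the CAI module would be scored this way against the question and ranked; here the convolution kernel is random rather than learned.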
Question‐driven automatic text summarization is a popular technique for producing concise, informative answers to specific questions from a document collection. Both query‐based and question‐driven summarization may fail to produce reliable summaries containing relevant information if they do not take advantage of both extractive and abstractive summarization mechanisms. In this article, we propose a novel extractive and abstractive hybrid framework designed for question‐driven automatic text summarization. The framework consists of complementary modules that work together to generate an effective summary: (1) an open‐domain multi‐hop question answering system, based on a convolutional neural network, a multi‐head attention mechanism, and a reasoning process, that discovers appropriate non‐redundant sentences as plausible answers; and (2) a novel transformer‐based paraphrasing generative adversarial network that rewrites the extracted sentences in an abstractive setup. Experiments show that this framework yields more reliable abstractive summaries than competing methods. We performed extensive experiments on public datasets, and the results show that our model outperforms many question‐driven and query‐based baseline methods (R1, R2, and RL increases of 6%–7% over the next‐highest baseline).
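The two‐stage flow described in (1) and (2) can be sketched as a minimal pipeline, with plain word overlap standing in for the CNN and multi‐head attention scorer and an identity stub standing in for the transformer‐based paraphrasing GAN. Every function and threshold here is a hypothetical placeholder, not the paper's implementation.

```python
import re

def tokens(text):
    """Lowercased word set for a crude bag-of-words comparison."""
    return set(re.findall(r"\w+", text.lower()))

def overlap_score(question, sentence):
    q = tokens(question)
    return len(q & tokens(sentence)) / (len(q) or 1)

def redundant(sentence, chosen, thresh=0.6):
    """Skip a sentence whose tokens mostly repeat an already-chosen one."""
    s = tokens(sentence)
    return any(len(s & tokens(c)) / (len(s) or 1) > thresh for c in chosen)

def extract(question, sentences, k=2):
    """Stage 1 (extractive): keep the k best-scoring, mutually non-redundant sentences."""
    picked = []
    for sent in sorted(sentences, key=lambda x: overlap_score(question, x), reverse=True):
        if len(picked) < k and not redundant(sent, picked):
            picked.append(sent)
    return picked

def paraphrase(sentence):
    """Stage 2 (abstractive) stand-in: the paper rewrites with a paraphrasing GAN."""
    return sentence

def summarize(question, sentences, k=2):
    return " ".join(paraphrase(s) for s in extract(question, sentences, k))

question = "what causes rain"
sentences = [
    "Rain is caused by water vapor condensing in clouds.",
    "Water vapor condensing in clouds is what causes rain.",
    "Deserts receive little rain.",
]
print(summarize(question, sentences))
```

The redundancy filter is what makes the first sentence drop out in this toy run: it is a near‐duplicate of the higher‐scoring second sentence, so the extractor moves on to the next distinct candidate before the paraphrase stage runs.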