2020
DOI: 10.1109/access.2020.2988903
Recent Trends in Deep Learning Based Open-Domain Textual Question Answering Systems

Abstract: Open-domain textual question answering (QA), which aims to answer questions from large data sources such as Wikipedia or the web, has gained wide attention in recent years. Recent advances in open-domain textual QA stem mainly from significant developments in deep learning techniques, especially machine reading comprehension and neural-network-based information retrieval, which allow models to continually refresh state-of-the-art performance. However, a comprehensive review of existing approaches …


Cited by 45 publications (18 citation statements)
References: 92 publications
“…Since the original RNNs are unable to learn the dependency found in input data especially when the gap is large, LSTM, due to the proposed gate functions, could handle such a problem well [20]. In practice, the powerful learning capacity of the LSTM method makes it one of the most used DL architectures and has been widely used in many fields, such as sentiment analysis [15,21,22], question answering systems [23], sentence embedding [24], and text classification [25].…”
Section: Long Short-Term Memory
confidence: 99%
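The excerpt attributes the LSTM's ability to bridge long gaps in the input to its gate functions. A minimal PyTorch sketch of such a model (not from the surveyed paper; the classifier head and all hyperparameters are illustrative assumptions):

```python
# Minimal sketch: an LSTM text classifier. nn.LSTM implements the
# input/forget/output gates internally; the forget gate is what lets
# information persist across long gaps in the sequence.
import torch
import torch.nn as nn

class LSTMTextClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128,
                 hidden_dim=256, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)      # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)     # h_n: (1, batch, hidden_dim)
        return self.fc(h_n[-1])        # class logits per sequence

# Usage: classify a batch of two 50-token sequences (random toy ids).
model = LSTMTextClassifier()
tokens = torch.randint(0, 10000, (2, 50))
logits = model(tokens)                 # shape: (2, 2)
```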
“…The skip-gram model computes the conditional probability of a word by predicting the surrounding context words given the central target word. The CBOW model does the opposite of skip-gram, computing the conditional probability of a target word given the context words surrounding it across a window of size k [23]. Mathematically, both the CBOW (Equation (1)) and skip-gram (Equation (2)) models are trained as follows:…”
Section: Word Embeddings
confidence: 99%
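The excerpt cuts off before the citing paper's numbered equations. For reference, the standard word2vec objectives (as in Mikolov et al.) maximize the average log probability over a corpus of T words with window size k; the citing paper's Equations (1) and (2) may differ in notation:

```latex
% CBOW: predict the target word w_t from its surrounding context window.
\frac{1}{T}\sum_{t=1}^{T}
  \log p\!\left(w_t \mid w_{t-k},\dots,w_{t-1},w_{t+1},\dots,w_{t+k}\right)

% Skip-gram: predict each context word w_{t+j} from the target word w_t.
\frac{1}{T}\sum_{t=1}^{T}\;\sum_{\substack{-k \le j \le k \\ j \ne 0}}
  \log p\!\left(w_{t+j} \mid w_t\right)
```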
“…The third challenge is to gather responses from multiple sources to compose an answer. As in document retrieval, multiple neural ranking models have been proposed to retrieve answers relevant to a given user's question (Guo et al., 2019; Abbasiyantaeb & Momtazi, 2020; Huang et al., 2020). The neural ranking models for QA cover all five proposed categories for document retrieval, with a focus on the semantic matching signal between questions and answers.…”
Section: Question-Answering
confidence: 99%
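The excerpt highlights the semantic matching signal between questions and answers. A toy sketch of that signal, scoring candidates by cosine similarity of mean-pooled embeddings (the random embedding table and token ids are placeholders, not any cited model, which would learn these representations end-to-end):

```python
# Hedged sketch: rank candidate answers by a semantic-matching score.
import torch
import torch.nn.functional as F

def embed(token_ids, embedding):
    """Mean-pool word embeddings into a single sentence vector."""
    return embedding(token_ids).mean(dim=0)

vocab_size, dim = 1000, 64
embedding = torch.nn.Embedding(vocab_size, dim)

question = torch.randint(0, vocab_size, (8,))    # toy question tokens
candidates = [torch.randint(0, vocab_size, (12,)) for _ in range(5)]

q_vec = embed(question, embedding)
scores = [F.cosine_similarity(q_vec, embed(c, embedding), dim=0).item()
          for c in candidates]
ranked = sorted(range(len(candidates)), key=lambda i: scores[i],
                reverse=True)
print(ranked)  # candidate indices, best semantic match first
```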
“…Entity Search and Question Answering. Entity-centric search and question answering are broad areas that cover a variety of information-seeking needs, see surveys like [2,9,18,28]. As far as quantities are concerned, lookups are supported by many methods, over both knowledge graphs and text documents, and are part of major benchmarks, such as QALD [36], NaturalQuestions [22], ComplexWebQuestions [35], LC-QuAD [12] and others.…”
Section: Related Work
confidence: 99%