Transformer-Based Neural Network for Answer Selection in Question Answering

2019
DOI: 10.1109/access.2019.2900753

Abstract: Answer selection is a crucial subtask in question answering (QA) systems. Conventional approaches to this task mainly concentrate on developing linguistic tools, which are limited in both performance and practicability. With the tremendous success of deep learning in natural language processing, answer selection approaches based on deep learning have been well investigated. However, the traditional neural networks employed in existing answer selection models, i.e., recurrent neural network or convolutional neura…
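
Since the abstract only outlines the approach, the following is a minimal, hypothetical sketch of the general idea it describes: encode the question and each candidate answer with a shared Transformer encoder and rank candidates by similarity. The model sizes, mean pooling, and cosine scoring here are illustrative assumptions, not the authors' actual architecture (positional encodings are omitted for brevity).

```python
# Hypothetical sketch of Transformer-based answer selection; not the paper's
# exact model. A shared encoder embeds question and answers, and candidates
# are ranked by cosine similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformerScorer(nn.Module):
    def __init__(self, vocab_size=10000, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def encode(self, token_ids):
        # token_ids: (batch, seq_len) -> mean-pooled sentence vector (batch, d_model).
        # Positional encodings are omitted here for brevity.
        h = self.encoder(self.embed(token_ids))
        return h.mean(dim=1)

    def forward(self, question_ids, answer_ids):
        q = self.encode(question_ids)
        a = self.encode(answer_ids)
        # Cosine similarity as the matching score, one per (question, answer) pair.
        return F.cosine_similarity(q, a, dim=-1)

# Toy usage with random token ids standing in for a tokenized QA pair.
model = TransformerScorer()
q = torch.randint(0, 10000, (1, 12))        # one question
answers = torch.randint(0, 10000, (3, 12))  # three candidate answers
scores = model(q.expand(3, -1), answers)
best = scores.argmax().item()               # index of the top-ranked candidate
```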

Cited by 84 publications (32 citation statements)
References 17 publications (30 reference statements)
“…Such sequential generation of next statement tokens, however, weakens the original meaning of the first statement (question). Recently, several models based on the Transformer (Vaswani et al. 2017), such as for passage ranking (Nogueira et al. 2019; Liu, Duh, and Gao 2018) and answer selection (Shao et al. 2019), have been proposed to evaluate question-answering systems. There are, however, few Transformer-based methods that generate non-factoid answers.…”
Section: Related Work
confidence: 99%
“…Multi-head attention is the foundation of the Transformer, and it can jointly attend to information from different representation subspaces at different positions, thus facilitating more effective learning of long-term dependencies. In the recent literature, we have already witnessed the success of the Transformer on many tasks, such as dense video captioning [37], question answering [38], and relation extraction [39], which inspires us to improve multi-turn response generation with the multi-head attention mechanism.…”
Section: Attention
confidence: 99%
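
To make the mechanism described in this statement concrete, here is a minimal sketch of multi-head scaled dot-product attention following the standard formulation of Vaswani et al. (2017). The weight matrices, shapes, and function name are illustrative assumptions, not taken from any of the cited models.

```python
# Minimal multi-head scaled dot-product attention: the input is projected,
# split into heads, and each head attends in its own low-dimensional subspace.
import torch
import torch.nn.functional as F

def multi_head_attention(x, w_q, w_k, w_v, w_o, num_heads):
    # x: (batch, seq_len, d_model); w_*: (d_model, d_model) projection matrices.
    batch, seq_len, d_model = x.shape
    d_head = d_model // num_heads

    def split(t):  # (batch, seq, d_model) -> (batch, heads, seq, d_head)
        return t.view(batch, seq_len, num_heads, d_head).transpose(1, 2)

    q, k, v = split(x @ w_q), split(x @ w_k), split(x @ w_v)
    # Each head attends to all positions within its own representation subspace.
    scores = q @ k.transpose(-2, -1) / d_head ** 0.5
    weights = F.softmax(scores, dim=-1)
    out = weights @ v                                  # (batch, heads, seq, d_head)
    out = out.transpose(1, 2).reshape(batch, seq_len, d_model)
    return out @ w_o                                   # recombine the heads

x = torch.randn(2, 5, 64)
w = [torch.randn(64, 64) for _ in range(4)]            # w_q, w_k, w_v, w_o
y = multi_head_attention(x, *w, num_heads=8)           # (2, 5, 64)
```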
“…Shao et al. [30] first propose a collaborative learning model which learns the distributed representations of the question and answer by CNN and Bi-LSTM. Then, Shao et al. [31] use a Bi-LSTM to acquire both global information and sequential features in the question or answer sentence. The most similar work to ours is by Tan et al. [32], but they address non-factoid questions.…”
Section: B. Question Answering
confidence: 99%
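
As a companion to the statement above, this is a minimal sketch of the Bi-LSTM sentence-encoding idea it references: the forward and backward passes capture sequential features from the left and right context, and their concatenated final states give a global sentence vector. The vocabulary, embedding, and hidden sizes are arbitrary assumptions for illustration.

```python
# Hypothetical Bi-LSTM sentence encoder; sizes are arbitrary, not the cited models'.
import torch
import torch.nn as nn

embed = nn.Embedding(10000, 100)
bilstm = nn.LSTM(input_size=100, hidden_size=64,
                 batch_first=True, bidirectional=True)

tokens = torch.randint(0, 10000, (1, 15))            # one tokenized sentence
outputs, (h_n, _) = bilstm(embed(tokens))
# outputs: (1, 15, 128) per-token features from both directions;
# h_n: (2, 1, 64) final hidden states of the forward and backward passes.
sentence_vec = torch.cat([h_n[0], h_n[1]], dim=-1)   # global vector, (1, 128)
```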
“…A question can have multiple ground-truth answers, and an answer can correspond to multiple questions or various paraphrases of the same question. For example, a comparative question can be answered with two single facts (triples). Our objective: 1) Given a collection of questions, identify all comparative questions; 2) extract comparative elements from the identified questions; 3) given a question q containing the extracted comparative elements and its answer candidate set C_q = {a_1, a_2, …}, …”
Section: (Comparative Elements)
confidence: 99%