2021
DOI: 10.1007/978-3-030-63591-6_63

Utilizing Bidirectional Encoder Representations from Transformers for Answer Selection

Cited by 13 publications (15 citation statements)
References 24 publications
“…Earlier work primarily relies on feature engineering and linguistic information [60]. However, the advancement of deep learning introduces powerful models [61], [62] that outperform traditional methods without the need for manual effort or feature engineering.…”
Section: Answer Selection (mentioning)
confidence: 99%
“…Once the extended dataset is generated, we can apply any answer selection method to the dataset. There are a number of studies in the literature on this topic, including COALA [61], CETE [62], MTQA [79], and many more, among which CETE is considered state-of-the-art in the answer selection task by the ACL community. CETE implements a transformer-based encoder (e.g., BERT) to encode the question and answer pair into a single vector and calculates the probability that a pair of question/answer should match or not.…”
Section: Experimental Settings (mentioning)
confidence: 99%
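The statement above describes CETE's pair-encoding setup: the question and a candidate answer are packed into a single input sequence and a transformer-based encoder predicts whether the pair matches. The following is a minimal sketch of that setup using the Hugging Face transformers library; the checkpoint name, the label mapping, and the maximum sequence length are illustrative assumptions, not the authors' released configuration.

```python
# Minimal sketch of a BERT cross-encoder for question/answer matching,
# assuming the Hugging Face `transformers` library. "bert-base-uncased" is a
# placeholder; in practice such a model is fine-tuned on labeled QA pairs.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # placeholder checkpoint, not the authors' model

tokenizer = BertTokenizer.from_pretrained(MODEL_NAME)
model = BertForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def match_probability(question: str, answer: str) -> float:
    """Encode the question/answer pair jointly and return P(match)."""
    # The pair is packed into one sequence: [CLS] question [SEP] answer [SEP]
    inputs = tokenizer(question, answer, return_tensors="pt",
                       truncation=True, max_length=256)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Treating class index 1 as the "match" class is an assumption here.
    return torch.softmax(logits, dim=-1)[0, 1].item()

print(match_probability("Who wrote Hamlet?",
                        "Hamlet was written by William Shakespeare."))
```

Ranking candidate answers then reduces to scoring each one against the question with this function and keeping the highest-probability pair.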
“…With the widespread application of pre-trained language models, the accuracy of the answer selection task can also be improved by pre-training methods. Laskar et al. [35] used the pre-trained language model BERT for answer selection tasks, which can effectively leverage the context of each word in a sentence and improve the performance of the model.…”
Section: Answer Selection (mentioning)
confidence: 99%
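The quoted point is that a pre-trained BERT produces contextual representations, i.e. the vector for a word depends on the rest of the sentence. The small sketch below is an illustration rather than the cited paper's code: it compares the hidden states that bert-base-uncased assigns to the word "bank" in two different contexts, assuming the Hugging Face transformers library.

```python
# Illustration of contextual word representations in BERT: the same surface
# word receives different vectors in different sentences, which is what lets
# an answer selection model disambiguate overlapping words.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def token_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual hidden state of the first occurrence of `word`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]          # (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    return hidden[tokens.index(word)]

v_river = token_vector("He sat on the bank of the river.", "bank")
v_money = token_vector("She deposited cash at the bank.", "bank")
cos = torch.nn.functional.cosine_similarity(v_river, v_money, dim=0)
print(f"cosine similarity between the two 'bank' vectors: {cos.item():.3f}")
```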
“…Then, we fine-tune the pre-trained transformer-based summarization model on each individual document using the weak reference summaries. In this way, we generate the summary of each individual document and then select the most relevant sentences as the final summary using a transformer-based answer selection model (Laskar, Huang, and Hoque 2020; Laskar, Hoque, and Huang 2020b). In another approach, instead of training our model on each document, we again utilize a transformer-based answer selection model and construct a filtered input document via selecting the sentences (up to n tokens) in the document set that are most relevant to the query.…”
Section: Extending PreQFAS for Long Sequences in the MD-QFAS Task (mentioning)
confidence: 99%
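The filtered-input construction described in the quote amounts to ranking the sentences of the document set by their relevance to the query with a transformer-based pair classifier and keeping the top sentences up to a budget of n tokens. Below is a hedged sketch of that step; the checkpoint, the "relevant" label index, and the default token budget are placeholders rather than the authors' exact configuration.

```python
# Sketch of query-based sentence filtering: score every sentence against the
# query with a BERT pair classifier, then greedily keep the highest-scoring
# sentences until a token budget is exhausted. Assumes Hugging Face transformers.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

def relevance(query: str, sentence: str) -> float:
    """Score one query/sentence pair; class 1 is assumed to mean 'relevant'."""
    inputs = tokenizer(query, sentence, return_tensors="pt",
                       truncation=True, max_length=256)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

def filtered_input(query: str, sentences: list[str], n_tokens: int = 512) -> str:
    """Keep the most query-relevant sentences up to roughly n_tokens."""
    ranked = sorted(sentences, key=lambda s: relevance(query, s), reverse=True)
    kept, used = [], 0
    for sent in ranked:
        length = len(tokenizer.tokenize(sent))
        if used + length > n_tokens:
            break
        kept.append(sent)
        used += length
    return " ".join(kept)
```

The resulting filtered document can then be passed to the summarization model in place of the full document set, keeping the input within the encoder's length limit.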