Natural language based analysis of SQuAD: An analytical approach for BERT
2022
DOI: 10.1016/j.eswa.2022.116592

Cited by 14 publications (5 citation statements)
References 14 publications
“…After the article search, automatic data extraction was performed using the BertForQuestionAnswering tool, a model architecture based on the Bidirectional Encoder Representations from Transformers (BERT) (16) framework, designed specifically for question-answering tasks. The model was trained on the Stanford Question Answering Dataset (SQuAD) (17) and re-trained on a dataset extracted from the PubMed database to enable this step. The limitation of the SQuAD dataset lies in its reliance on general knowledge derived from a set of Wikipedia articles.…”
Section: Methods (mentioning)
confidence: 99%
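
The extraction step quoted above corresponds to standard usage of Hugging Face's BertForQuestionAnswering class. A minimal sketch, assuming the public SQuAD-fine-tuned checkpoint bert-large-uncased-whole-word-masking-finetuned-squad rather than the PubMed re-trained model the authors describe:

```python
# Minimal sketch of extractive QA with a SQuAD-fine-tuned BERT checkpoint.
# The checkpoint name is an assumption: any BERT QA model fine-tuned on SQuAD works.
import torch
from transformers import BertForQuestionAnswering, BertTokenizerFast

MODEL = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokenizer = BertTokenizerFast.from_pretrained(MODEL)
model = BertForQuestionAnswering.from_pretrained(MODEL)

question = "Which dataset was the model trained on?"
context = ("The model was trained on the Stanford Question Answering Dataset "
           "(SQuAD) and re-trained on abstracts extracted from PubMed.")

# Encode question and context as a single sequence pair.
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The model predicts start/end logits over the tokens; decode the best span.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
answer = tokenizer.decode(inputs["input_ids"][0][start:end])
print(answer)  # e.g. "the stanford question answering dataset ( squad )"
```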
“…It has shown promising performance on unseen NLP tasks (Papadopoulos, Panagakis, Koubarakis, & Nicolaou, 2022) and in medical report processing (Donnelly, Grzeszczuk, & Guimaraes, 2022). BERT also performs well in text summarization (Patil, Rao, Reddy, Ram, & Meena, 2022), audio captioning (Liu, Mei et al., 2022), and natural language analysis (Guven & Unalir, 2022). Beyond these uses, BERT is likely to be a game-changer in NLP because it is a bidirectional model that combines Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) to understand context-heavy text.…”
Section: Related Work (mentioning)
confidence: 99%
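
The MLM objective this statement refers to is easy to see in action through the transformers fill-mask pipeline. A minimal sketch, assuming the standard bert-base-uncased checkpoint:

```python
# Minimal sketch of BERT's Masked Language Modeling (MLM) objective,
# assuming the standard bert-base-uncased checkpoint.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# BERT reads the sentence bidirectionally, so both the left and right
# context inform the prediction for the [MASK] token.
for pred in unmasker("The answer span is extracted from the [MASK] passage."):
    print(f"{pred['token_str']:>12}  score={pred['score']:.3f}")
```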
“…Medved et al. [27] designed an automatic QA model using TF-IDF. Second, for machine reading-based QA models, Guven et al. [28] used natural language processing (NLP) models to improve performance, especially when unrelated sentences are included in the dataset. They introduced three kinds of NLP models to select relevant sentences: 1) remove and compare (RC), 2) searching with named entity recognition (SNER), and 3) searching with part-of-speech (POS) tagging (SPOS).…”
Section: Related Work 2.1 Question-Answering Matching Models (mentioning)
confidence: 99%
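
The entity- and POS-based sentence selection described in this statement can be illustrated with spaCy. This is a hypothetical sketch of the general technique (names like select_sentences and key_terms are mine), not the SNER/SPOS implementation from the cited paper:

```python
# Hypothetical sketch of selecting context sentences relevant to a question
# by overlapping named entities and content-word lemmas (the general idea
# behind SNER/SPOS-style filtering; not the authors' exact implementation).
import spacy

nlp = spacy.load("en_core_web_sm")
CONTENT_POS = {"NOUN", "PROPN", "VERB", "NUM"}

def key_terms(doc):
    """Named-entity texts plus lemmas of content-word tokens."""
    terms = {ent.text.lower() for ent in doc.ents}
    terms |= {tok.lemma_.lower() for tok in doc if tok.pos_ in CONTENT_POS}
    return terms

def select_sentences(question, context, top_k=2):
    """Rank context sentences by key-term overlap with the question."""
    q_terms = key_terms(nlp(question))
    scored = [(len(q_terms & key_terms(sent.as_doc())), sent.text)
              for sent in nlp(context).sents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for score, text in scored[:top_k] if score > 0]

context = ("SQuAD was built from Wikipedia articles. BERT reads text "
           "bidirectionally. Unrelated sentences are filtered out before "
           "answer extraction.")
print(select_sentences("How was SQuAD built?", context))
```

Filtering the context before extraction narrows the search space for the QA model, which is why such preprocessing helps when the dataset contains sentences unrelated to the question.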