“…They found that the single-language GREEK-BERT model they trained outperforms the multilingual M-BERT and XLM-R models. In their research, Hall et al. [42] conducted an extensive review of NLP models and their applications in the context of COVID-19 research. Their focus was primarily on transformer-based biomedical pretrained language models (T-BPLMs) and sentiment analysis related to COVID-19 vaccination.…”
The transformer is a well-known natural language processing model proposed by Google in 2017. With the extensive development of deep learning, many natural language processing tasks can now be solved by deep learning methods. After the BERT model was proposed, many pre-trained models, such as XLNet, RoBERTa, and ALBERT, were also introduced in the research community. These models perform very well on various natural language processing tasks. In this paper, we describe and compare these well-known models. In addition, we apply five established models, namely BERT, XLNet, RoBERTa, GPT-2, and ALBERT, to a range of well-known natural language processing tasks, and analyze each model based on its performance. Few papers comprehensively compare various transformer models. In our paper, we use six well-known tasks, namely sentiment analysis, question answering, text generation, text summarization, named entity recognition, and topic modeling, to compare the performance of various transformer models. In addition, building on the existing models, we propose ensemble learning models for the different natural language processing tasks. The results show that our ensemble learning models perform better than a single classifier on specific tasks.
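The abstract does not specify how the ensemble models combine the individual transformer classifiers; a common and simple scheme is majority voting over each model's predicted labels. The sketch below is a generic illustration of that idea, not the paper's actual method; the model names and labels are hypothetical placeholders.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model label predictions by majority vote.

    predictions: list of lists, one inner list of labels per model,
    all aligned on the same examples.
    """
    n_examples = len(predictions[0])
    combined = []
    for i in range(n_examples):
        # Count each model's vote for example i and keep the most common label.
        votes = Counter(model_preds[i] for model_preds in predictions)
        combined.append(votes.most_common(1)[0][0])
    return combined

# Hypothetical sentiment outputs from three fine-tuned classifiers on four reviews
bert_preds    = ["pos", "neg", "pos", "neg"]
roberta_preds = ["pos", "pos", "pos", "neg"]
xlnet_preds   = ["neg", "pos", "pos", "neg"]

print(majority_vote([bert_preds, roberta_preds, xlnet_preds]))
# ['pos', 'pos', 'pos', 'neg']
```

With an odd number of classifiers and binary labels, a majority always exists; the ensemble can outperform any single classifier when the models make partially uncorrelated errors.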
“…A considerable amount of information can be analyzed by a human investigator. However, the bulk of data produced by various healthcare and scientific research units is beyond manual human analysis ( Hall, Chang & Jayne, 2022 ). In such situations, the role of NLP applications is both immediate and extensive.…”
Section: NLP for Electronic Health Records
COVID-19 was one of humanity’s most devastating health crises; billions of people suffered during the pandemic. Compared with previous global pandemics, however, societies could draw on far better technical support systems during this disaster. The integration of data from healthcare units and its analysis by various sophisticated systems were critical factors, and healthcare units gave special consideration to advancing the technical inputs needed to fight such situations. The field of natural language processing (NLP) has dramatically supported this effort. Beyond primitive methods for monitoring a person’s biometric factors, the use of cognitive science emerged as one of the most important capabilities of the pandemic era, an essential feature being the ability to understand data drawn from various texts and user inputs. Deploying NLP systems that can handle the bulk of data flowing from multiple sources is one of the most challenging tasks. This study focused on developing a powerful application to advise patients suffering from ailments related to COVID-19: NLP is used to help users assess the severity of their present situation and make the necessary decisions when infected. The article also summarises the challenges associated with NLP and its use in future NLP-based applications for healthcare units. Several such applications already exist as Android-based systems and web-based chatbot systems; in terms of security and safety, application development for iOS is more advanced. The study also explains the development of an application for advising on COVID-19 infection. An NLP-powered application for the iOS operating system is one of a kind and will help people who need proper guidance.
The article also describes NLP-based application development for healthcare problems associated with personal reporting systems.
“…The transformer encoder generates a sequence of hidden states from the input sequence, which is subsequently passed to the decoder [18]. The decoder generates the output sequence by attending to both the encoder states and its own prior outputs.…”
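At the core of both the encoder and the decoder's attention over encoder states is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. The following is a minimal pure-Python sketch of that formula for illustration, assuming Q, K, and V are given as lists of row vectors; it is not drawn from the cited paper.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    output = []
    for q in Q:
        # Similarity of this query against every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        # Output row is the weight-averaged combination of the value rows.
        output.append([sum(w * v[j] for w, v in zip(weights, V))
                       for j in range(len(V[0]))])
    return output
```

In an encoder-decoder setup, the decoder's cross-attention uses its own hidden states as Q while K and V come from the encoder states, which is exactly the "attending to the encoder states" described in the snippet.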
Arabic is a language with rich morphology and few resources, and is therefore recognized as one of the most challenging languages for machine translation. Translation into Arabic has received significantly less attention than translation between European languages; consequently, Arabic machine translation quality needs further investigation. This paper proposes a translation model between Arabic and English based on Neural Machine Translation (NMT). The proposed model employs a transformer with multi-head attention, combining a feed-forward network with a multi-head attention mechanism. The proposed NMT model demonstrated its effectiveness in improving translation, achieving an impressive accuracy of 97.68%, a loss of 0.0778, and a near-perfect Bilingual Evaluation Understudy (BLEU) score of 99.95. Future work will focus on exploring more effective ways of addressing the evaluation and quality estimation of NMT for low-resource languages, which are often challenging as a result of the scarcity of reference translations and human annotators.
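The BLEU metric cited above scores a candidate translation by clipped n-gram precision against a reference, combined with a brevity penalty. The sketch below is an illustrative single-reference, sentence-level variant, not the corpus-level BLEU the paper presumably used; the tokenized example sentences are invented.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, candidate, max_n=4):
    """Illustrative sentence-level BLEU: geometric mean of clipped
    n-gram precisions (n = 1..max_n) times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(candidate) > len(reference) else \
        math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the cat sat on the mat".split()
print(sentence_bleu(ref, ref))  # an identical translation scores 1.0
```

Because a BLEU score this close to perfect (99.95 on a 0-100 scale) is unusual in practice, sanity-checking the evaluation pipeline with a metric implementation like this is a reasonable precaution.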