Coronavirus disease (COVID-19) has had an immense impact in recent months, causing thousands of deaths around the world and prompting a rapid research effort against this new virus. In computer science, many technical studies have tackled it using image-processing algorithms. In this work, we introduce a method based on deep learning networks to classify COVID-19 from X-ray images. Our results are encouraging for distinguishing infected patients from healthy ones. We conduct our experiments on a recent dataset, the Kaggle dataset of COVID-19 X-ray images, using the ResNet50 deep learning network with 5- and 10-fold cross-validation. The experimental results show that 5-fold cross-validation yields better results than 10-fold, with an accuracy of 97.28%.
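The 5- and 10-fold evaluation protocols mentioned above can be sketched with a small index-splitting helper. This is a minimal illustration of k-fold cross-validation splitting, not the paper's actual training pipeline; the function name `k_fold_indices` and the dataset size are assumptions for the example.

```python
def k_fold_indices(n_samples, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation.

    Each of the k folds serves once as the held-out test set while the
    remaining folds form the training set.
    """
    indices = list(range(n_samples))
    # Distribute any remainder across the first few folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = indices[start:start + size]
        train_idx = indices[:start] + indices[start + size:]
        yield train_idx, test_idx
        start += size

# Example: 100 images, 5 folds -> five 80/20 train/test splits.
splits = list(k_fold_indices(100, 5))
```

With 5 folds each test set covers 20% of the data; with 10 folds only 10%, which is one plausible reason the two protocols can score differently.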
Abstract. Text classification (TC) is an essential field in both text mining (TM) and natural language processing (NLP). Humans tend to organize and categorize everything to make things easier to understand, and text classification is an important step toward this goal. Arabic text classification (ATC) is difficult because the Arabic language has complications and limitations arising from the nature of its morphology. In this paper, a proposed approach called the Master-Slaves technique (MST) is used to improve Arabic text classification. It consists of two main phases. In the first phase, a new Arabic corpus of 16,757 text files was collected and manually classified into five categories. In the second phase, four different classifiers were applied to the collected corpus: Naïve Bayes (NB), K-Nearest Neighbour (KNN), Multinomial Logistic Regression (MLR), and Maximum Weight (MW). The Naïve Bayes classifier served as the master and the others as slaves; the results of the slave classifiers were used to adjust the class probabilities of the master. To check the effectiveness and efficiency of the proposed technique, the four classifiers were also run individually on the corpus, along with a simple voting scheme among them. All tests were applied after pre-processing the Arabic text documents (tokenization, stemming, and stop-word removal), with each document represented as a vector of weights. For the reliability of the results, 10-fold cross-validation was used. The results show that the Master-Slaves technique gives a good improvement in document-classification accuracy with acceptable algorithmic complexity compared to the other techniques.
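One way to picture the Master-Slaves idea described above is to let the slave classifiers' votes nudge the master's class probabilities before the final decision. The weighting scheme below (a fixed additive boost per vote, then renormalization) is purely illustrative; the abstract does not give the paper's exact update rule.

```python
def master_slaves_predict(master_probs, slave_votes, boost=0.1):
    """Combine a master classifier's probabilities with slave votes.

    master_probs: dict mapping class label -> probability from the master
                  (e.g. Naive Bayes posteriors).
    slave_votes:  list of predicted labels, one per slave classifier.
    boost:        additive weight each slave vote contributes (illustrative).
    """
    adjusted = dict(master_probs)
    for label in slave_votes:
        adjusted[label] = adjusted.get(label, 0.0) + boost
    # Renormalize so the adjusted scores form a probability distribution.
    total = sum(adjusted.values())
    adjusted = {c: p / total for c, p in adjusted.items()}
    return max(adjusted, key=adjusted.get)

# Example: two slaves outvote the master's top class.
label = master_slaves_predict(
    {"sports": 0.40, "politics": 0.35, "economy": 0.25},
    ["politics", "politics", "sports"],
)
```

Unlike plain majority voting, this keeps the master's probability estimates in play, so a confident master can still override weak slave agreement.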
Collaborative writing tools and natural language processing play a vital role in learning networks. These tools deal with the interaction between computer systems and human languages, processing data through lexical analysis, parsing, syntax analysis, and similar steps. Syntax analysis, used for syntactic parsing, deals with the syntactic structure of a sentence. Collaborative writing tools and NLP applications are used for verb-tense prediction, which encodes the temporal order of activities in a sentence; recognizing the syntactic structure helps in identifying the meaning of a sentence. The model introduced in this paper predicts verb tense based on lexical and syntactic features. It works on English articles: each article is split into sentences through tokenization, each token in a sentence is analyzed, and the model parses the sentences using tense algorithms that represent the grammar rules of the English language. The model gives accurate results when examined on articles and stories.
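The tokenize-then-apply-grammar-rules pipeline described above can be sketched with a toy rule-based tense detector. The three rules below are a tiny illustrative subset, not the paper's full tense algorithm, and the function names are assumptions for the example.

```python
import re

def tokenize(sentence):
    """Split a sentence into lowercase word tokens."""
    return re.findall(r"[A-Za-z']+", sentence.lower())

def detect_tense(sentence):
    """Classify a sentence's tense with a few simple lexical rules.

    Rules (illustrative only): 'will' or 'going to' -> future;
    past-tense auxiliaries or a regular '-ed' verb form -> past;
    otherwise -> present.
    """
    tokens = tokenize(sentence)
    if "will" in tokens or ("going" in tokens and "to" in tokens):
        return "future"
    if any(t in ("was", "were", "did") or t.endswith("ed") for t in tokens):
        return "past"
    return "present"
```

A real system would need part-of-speech tagging and irregular-verb handling; for instance, the '-ed' rule above would misfire on nouns like "bed", which is exactly why the paper leans on syntactic parsing rather than surface patterns alone.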
The Iraqi dialect is one of the most beautiful dialects in the Arab world. It contains a wide variety of vocabulary as well as phrases spoken by the Iraqi population. The aim of this paper is to create, analyze, and translate an Iraqi colloquial-dialect corpus. The hardest step is creating the Iraqi corpus, which was generated by transcribing Iraqi films and stories and publishing them on GitHub. There are two phases. The first is analysis, which applies pre-processing: tokenization, stop-word removal, punctuation removal, and duplicate removal; its purpose is to collect the words for an Iraqi-Arabic-English dictionary. The second phase is testing, which involves searching for an Iraqi word, translating it to Standard Arabic and English, adding it if it does not exist, predicting Iraqi words based on spelling similarity, retrieving Iraqi synonym words, and displaying the whole dictionary. A new method is proposed to address the lack of an official written form for spoken dialects, along with a new algorithm to find Iraqi word synonyms.
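The analysis-phase pre-processing described above (tokenization, punctuation removal, stop-word removal, duplicate removal) can be sketched in a few lines. The stop-word set here is a tiny illustrative sample, not the paper's actual list, and the function name `preprocess` is an assumption.

```python
import re

# Illustrative sample only; a real pipeline would use a full stop-word list.
STOP_WORDS = {"في", "من", "على", "the", "a"}

def preprocess(text):
    """Tokenize, drop punctuation, remove stop words, and deduplicate.

    Returns the remaining tokens in first-seen order, which is what a
    dictionary-building pass would collect.
    """
    # \w+ keeps word characters (including Arabic letters) and drops punctuation.
    tokens = re.findall(r"\w+", text)
    seen, result = set(), []
    for tok in tokens:
        if tok in STOP_WORDS or tok in seen:
            continue
        seen.add(tok)
        result.append(tok)
    return result
```

Preserving first-seen order (rather than using a plain set) keeps the output deterministic, which makes the resulting word list easier to diff as the corpus grows.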