With the rapid evolution of the internet, social media networks such as Twitter, Facebook, and Tumblr have become so common that they affect every aspect of human life. Twitter is one of the most popular micro-blogging platforms; it allows people to share their emotions in short texts about a variety of topics such as a company's products, people, politics, and services. Because emotions and reviews on different topics are shared every second, sentiment analysis becomes feasible, and social media has become a useful source of information in fields such as business, politics, applications, and services. The Twitter Application Programming Interface (Twitter-API), an interface between developers and Twitter, allows developers to search for tweets matching a desired keyword using a set of secret keys and tokens. In this work, the Twitter-API was used to download the most recent tweets about four keywords (Trump, Bitcoin, IoT, and Toyota), with a different number of tweets for each. VADER, a lexicon- and rule-based method, was used to categorize the downloaded tweets into "Positive" and "Negative" based on their polarity, and the tweets were then stored in a MongoDB database for the subsequent steps. After pre-processing, the hold-out technique was used to split each dataset into 80% as the training set and the remaining 20% as the testing set. A deep-learning-based Document-to-Vector (Doc2Vec) model was then used for feature extraction. For the classification task, a Radial Basis Function kernel-based support vector machine (RBF-SVM) was used. The accuracy of the RBF-SVM depends mainly on the values of the soft-margin penalty parameter C and the kernel parameter γ (gamma). The main goal of this work is to select the best values for these parameters in order to improve the accuracy of the RBF-SVM classifier.
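The VADER step described above assigns each tweet a polarity score and thresholds it into "Positive" or "Negative". The following is a minimal sketch of the lexicon rule-based idea only, using a tiny invented lexicon and a one-word negation rule; it is not the actual VADER lexicon or scoring (with the real library, the call would be `SentimentIntensityAnalyzer().polarity_scores(text)`):

```python
# Toy lexicon-based polarity scorer illustrating the rule-based idea
# behind VADER. The lexicon entries and valences are invented for this sketch.
LEXICON = {"great": 1.9, "good": 1.6, "love": 2.1,
           "bad": -1.8, "terrible": -2.3, "hate": -2.2}
NEGATIONS = {"not", "no", "never"}

def polarity(text: str) -> float:
    """Sum lexicon valences, flipping the sign of the word after a negation."""
    score, flip = 0.0, 1.0
    for token in text.lower().split():
        if token in NEGATIONS:
            flip = -1.0
            continue
        score += flip * LEXICON.get(token, 0.0)
        flip = 1.0  # in this sketch, negation only affects the next word
    return score

def label(text: str) -> str:
    """Threshold the polarity score into the two classes used in the study."""
    return "Positive" if polarity(text) >= 0 else "Negative"
```

For example, `label("i love this product")` yields "Positive", while `label("not good at all")` yields "Negative" because the negation flips the valence of "good".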
The objective of this study is to show the impact of four meta-heuristic optimization algorithms, namely particle swarm optimization (PSO), modified PSO (MPSO), the grey wolf optimizer (GWO), and a hybrid PSO-GWO, on SVM classification accuracy when they are used to select the best values for those parameters. To the best of our knowledge, the hybrid PSO-GWO has never before been used for SVM optimization. The results show that these optimizers have a significant impact on SVM accuracy. The best accuracy with the traditional SVM was 87.885%. After optimization, the highest accuracy, 91.053%, was obtained with GWO, while the best accuracies of PSO, hybrid PSO-GWO, and MPSO were 90.736%, 90.657%, and 90.557%, respectively.
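The PSO variants above search the (C, γ) plane for the hyper-parameter pair that maximizes validation accuracy. A minimal sketch of the canonical PSO update rule follows, here minimizing a toy two-dimensional objective that stands in for the SVM validation error; the inertia weight and acceleration constants are illustrative assumptions, not the settings used in the study:

```python
import random

def pso(objective, dim=2, n_particles=20, iters=100,
        lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical PSO: each particle's velocity is pulled toward its personal
    best and the swarm's global best; w, c1, c2 are illustrative constants."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_f = [objective(p) for p in pos]       # personal best fitnesses
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy stand-in for the SVM validation error over, e.g., (log C, log gamma):
sphere = lambda x: sum(v * v for v in x)
best, best_f = pso(sphere)
```

In the actual tuning setting, `objective` would train an RBF-SVM with the candidate (C, γ) pair and return its validation error; GWO and the hybrid variants replace only the velocity-update step with their own position updates.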
Cancer is one of the major causes of human death worldwide, and lung cancer is among its deadliest forms, causing the highest death rate for both genders combined. Detecting lung cancer at an early stage does not guarantee the patient's survival, but it can reduce the mortality rate considerably. Early detection mainly involves screening the lungs with the most valuable imaging modality, the CT scan. Classifying nodules in lung CT images with an automated computer system has become a necessary task, given the huge number of cases every day, to support human experts in the decision-making process. Over the past few years, numerous computer systems have been presented, each performing a specific task such as detecting, segmenting, or classifying lung tumors using different algorithms. The objective of this study is to design an automated lung nodule classification system using two distinct deep learning architectures: Network In Network (NIN) and a standard Convolutional Neural Network (CNN). The two models are trained and tested on 13,500 2D cubes (patches) cropped around the nodule locations in the LUNA16 dataset, which consists of 888 3D CT scans with an annotation file specifying the nodule positions in each scan. The models are trained with varying patch sizes and hyperparameters in order to develop a high-performance structure for each model. The experimental results show that the best scores achieved by NIN are 90% accuracy, 99% precision, 68% recall, and a 0.06% false positive rate, while the standard CNN achieves 90% accuracy, 85% precision, 85% recall, and a 7.52% false positive rate.
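The accuracy, precision, recall, and false-positive-rate figures reported above all derive from the nodule-vs-non-nodule confusion matrix. A short sketch of those standard definitions (the counts below are arbitrary illustrations, not the paper's actual confusion matrix):

```python
def metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary-classification metrics from confusion-matrix counts:
    true/false positives (tp, fp) and true/false negatives (tn, fn)."""
    return {
        "accuracy":  (tp + tn) / (tp + fp + tn + fn),
        "precision": tp / (tp + fp),
        "recall":    tp / (tp + fn),   # a.k.a. sensitivity
        "fpr":       fp / (fp + tn),   # false positive rate
    }

# Arbitrary example counts for illustration:
m = metrics(tp=680, fp=7, tn=1993, fn=320)
```

Note how a classifier can combine high accuracy and precision with much lower recall, as in the NIN results above: a conservative model that rarely flags non-nodules (low fp, hence low FPR and high precision) may still miss many true nodules (high fn, hence low recall).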
In recent years, deep learning has had enormous success in speech recognition and natural language processing. Recent progress in speech recognition for other languages has been quite promising, but the Kurdish language has not seen comparable development, and there are extremely few research papers on Kurdish speech recognition. In this paper, we investigate Gated Recurrent Units (GRUs), one of the popular RNN models, for recognizing individual Kurdish words, and propose a very simple deep-learning architecture to obtain a more efficient and more accurate model. The proposed model consists of a combination of CNN and GRU layers. The Kurdish Sorani Speech (KSS) dataset was created for the speech recognition system; it contains 18,799 sound files covering 500 formal Kurdish words. Finally, the proposed model was trained on the collected data and achieved over 96% accuracy. The combination of CNN and RNN (GRU) layers for speech recognition achieved superior performance compared to other feed-forward deep neural network models and other statistical methods.
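A single GRU time step, the building block of the recurrent layers above, can be sketched in a few lines of NumPy. The weight shapes, the random initialization, and the input size (13, suggestive of MFCC features) are illustrative assumptions, not the paper's architecture; the update follows the standard Cho-style formulation h_t = (1 − z_t)·h_{t−1} + z_t·h̃_t:

```python
import numpy as np

def gru_step(x, h_prev, params):
    """One GRU time step: update gate z, reset gate r, candidate state h_tilde."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h_prev + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)   # candidate state
    return (1.0 - z) * h_prev + z * h_tilde              # interpolated new state

# Illustrative sizes: 13 input features per frame, 32 hidden units.
rng = np.random.default_rng(0)
n_in, n_h = 13, 32
params = [rng.standard_normal(s) * 0.1
          for s in [(n_h, n_in), (n_h, n_h), (n_h,)] * 3]

h = np.zeros(n_h)
for t in range(5):  # run 5 dummy feature frames through the cell
    h = gru_step(rng.standard_normal(n_in), h, params)
```

In the architecture described above, convolutional layers would first extract local features from the spectrogram, and a stack of such GRU steps would then model the temporal structure of each word.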