Medical database classification problems can be considered complex optimization problems whose solution supports precise diagnosis. In healthcare, several researchers have employed different deep learning (DL) approaches to enhance classification performance. In addition, encryption is an effective way to offer secure transmission of medical data over public networks. With this motivation, this paper presents a new privacy-preserving encryption with DL based medical data transmission and classification (PPEDL-MDTC) model. The presented model derives a multiple-key homomorphic encryption (MHE) technique with sailfish optimization (SFO), called the MHE-SFO algorithm, for the encryption process. In addition, cross-entropy based artificial butterfly optimization is used for feature selection, and optimal deep neural network (ODNN) based classification is carried out. In the ODNN model, the hyperparameters of the DNN are optimized using the chemical reaction optimization (CRO) algorithm. The proposed method has been simulated using Python 3.6.5 and tested on an activity recognition dataset and a sleep stage dataset. A detailed comparative analysis confirms the higher efficiency of PPEDL-MDTC over state-of-the-art techniques, with detection accuracies of 0.9813 and 0.9650 on the applied activity recognition and University College Dublin Sleep Stage datasets, respectively.
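The abstract does not detail the MHE-SFO construction, but the homomorphic property such a pipeline relies on can be illustrated with a minimal, insecure Paillier-style sketch (toy primes chosen for readability; this is a conceptual stand-in, not the paper's multiple-key scheme):

```python
# Toy additively homomorphic encryption (classic Paillier with tiny,
# INSECURE demo primes) -- illustrates why encrypted medical values can
# be aggregated without decryption. Not the paper's MHE-SFO scheme.
import math
import random

p, q = 293, 433            # toy primes (real deployments use ~1024-bit primes)
n = p * q
n2 = n * n
g = n + 1                  # standard Paillier simplification
lam = (p - 1) * (q - 1)    # a multiple of lcm(p-1, q-1), which suffices

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # decryption constant

def encrypt(m):
    while True:                        # pick r coprime to n
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(12), encrypt(30)
# Homomorphic addition: multiplying ciphertexts adds the plaintexts
assert decrypt((c1 * c2) % n2) == 42
```

Because ciphertext multiplication adds plaintexts, a server can sum encrypted readings from many patients while never seeing any individual value.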
Information availability is a key factor in the acquisition of knowledge. Access to information, both in general areas and in more specific ones such as sciences, languages, and religion, has become wider since semantics were introduced into the World Wide Web. Semantic Web technologies assist in acquiring information by creating processes that link one piece of information to another. However, these technologies mostly support languages that use Latin-family scripts; Arabic is still not well supported. This paper reports on a survey of the support for Arabic in some existing Semantic Web technologies and outlines future scenarios for applying the Semantic Web to Arabic applications. Finally, multilingual support in these new technologies is also discussed.
Social media data are unstructured, and such big data are increasing exponentially day by day across many disciplines. Analyzing and understanding the semantics of these data is a major challenge due to their variety and huge volume. To address this gap, unstructured Arabic texts are studied in this work owing to their abundant appearance on social media websites. This work addresses the difficulty of handling unstructured social media texts, particularly when the data at hand are very limited; an intelligent data augmentation technique is used to handle this scarcity of data. This article proposes a novel architecture for Arabic word classification and understanding based on convolutional neural networks (CNNs) and recurrent neural networks. Moreover, the CNN technique is among the most powerful for the analysis of Arabic tweets and for social network analysis. The main technique used in this work is a character-level CNN and a recurrent neural network stacked on top of one another as the classification architecture. These two techniques achieve 95% accuracy on the Arabic texts dataset.
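As a rough illustration of the stacked architecture described above (a character-level CNN whose features feed a recurrent network), here is a minimal forward-pass sketch in NumPy; the toy Latin alphabet, layer sizes, and random weights are illustrative assumptions, not the paper's trained Arabic model:

```python
# Forward pass of a character-level CNN stacked with a simple RNN:
# one-hot-free char embeddings -> 1D convolution -> recurrent pass -> softmax.
import numpy as np

rng = np.random.default_rng(0)
vocab = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz ")}
V, E, F, K, H, C = len(vocab), 8, 16, 3, 12, 2   # vocab, embed, filters, kernel, hidden, classes

emb = rng.normal(size=(V, E)) * 0.1   # character embedding table
Wc  = rng.normal(size=(K, E, F)) * 0.1  # convolution kernel over K characters
Wxh = rng.normal(size=(F, H)) * 0.1   # RNN input weights
Whh = rng.normal(size=(H, H)) * 0.1   # RNN recurrent weights
Who = rng.normal(size=(H, C)) * 0.1   # classification head

def forward(text):
    x = emb[[vocab[c] for c in text]]                    # (T, E) char embeddings
    T = len(text)
    conv = np.stack([np.tanh(sum(x[t + k] @ Wc[k] for k in range(K)))
                     for t in range(T - K + 1)])         # (T-K+1, F) conv features
    h = np.zeros(H)
    for f in conv:                                       # RNN stacked on conv output
        h = np.tanh(f @ Wxh + h @ Whh)
    logits = h @ Who
    p = np.exp(logits - logits.max())                    # stable softmax
    return p / p.sum()

probs = forward("hello world")   # probability distribution over C classes
```

In practice the two components are trained jointly end to end; the convolution captures sub-word morphology while the recurrent layer models the order of the extracted features.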
Automated Essay Scoring (AES) is one of the most challenging problems in Natural Language Processing (NLP). The significant challenges include the length of the essay, the presence of spelling mistakes affecting the quality of the essay, and representing the essay in terms of relevant features for efficient scoring. In this work, we present a comparative empirical analysis of AES models based on combinations of various feature sets. We use 30 manually extracted features, a 300-dimensional word2vec representation, and 768-dimensional word embedding features from the BERT model, and form different combinations for evaluating the performance of AES models. We formulate automated essay scoring both as a rescaled regression problem and as a quantized classification problem, and analyze the performance of the AES models for the different combinations. We compare them against existing ensemble approaches in terms of Kappa statistics for the rescaled regression problem and accuracy for the quantized classification problem. The combination of the 30 manually extracted features, the 300-dimensional word2vec representation, and the 768-dimensional BERT embedding features yields a Kappa statistic of up to 77.2 ± 1.7 for the rescaled regression problem and an accuracy of 75.2 ± 1.0 for the quantized classification problem on a benchmark dataset consisting of about 12,000 essays divided into eight groups. The reported results direct researchers in the field to use manually extracted features along with deep encoded features to develop a more reliable AES model.
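The feature-combination setup described above can be sketched as follows. The random feature matrices and the ridge regressor are illustrative stand-ins for the paper's actual extracted features and models; only the dimensionalities (30 manual, 300 word2vec, 768 BERT) come from the abstract:

```python
# Sketch: concatenate three feature sets per essay, fit a regressor on
# rescaled scores in [0, 1], then quantize predictions into score bins.
import numpy as np

rng = np.random.default_rng(1)
n = 200                                   # hypothetical number of essays
f_manual = rng.normal(size=(n, 30))       # stand-in for hand-crafted features
f_w2v    = rng.normal(size=(n, 300))      # stand-in for word2vec essay vectors
f_bert   = rng.normal(size=(n, 768))      # stand-in for BERT embeddings
X = np.hstack([f_manual, f_w2v, f_bert])  # combined feature matrix (n, 1098)
y = rng.uniform(0.0, 1.0, size=n)         # rescaled essay scores in [0, 1]

# Ridge regression closed form: w = (X^T X + a I)^(-1) X^T y
a = 1.0
w = np.linalg.solve(X.T @ X + a * np.eye(X.shape[1]), X.T @ y)
pred = np.clip(X @ w, 0.0, 1.0)           # rescaled regression predictions

# Quantized view: map [0, 1] predictions back to discrete score bins 0..10
bins = np.round(pred * 10).astype(int)
```

Rescaling scores to a common [0, 1] range is what allows essays from the eight prompt groups, each with its own native score scale, to be modeled jointly.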
During the last decade, enormous volumes of urban data have been produced by government agencies, NGOs, and citizens. In such a scenario, we are presented with a diverse set of data that holds valuable information. This information can be extracted, analyzed, and put to a number of uses for the well-being of citizens. The major impediment to achieving this goal is the data itself: the available data are redundant, scattered, and come in various legacy formats. Data interoperability, scalability, and integration are paramount issues that cannot be resolved unless the scattered data silos are accessible in a standard representation. In this paper, we propose a framework that resolves data interoperability and its associated challenges in the smart city environment. The framework takes raw smart city data from several sources and stores them in a NoSQL database, transforming the scattered data into machine-processable data. Besides, the database is linked with an API and a simple dashboard for further analysis, which can be used to build big data applications based on urban data so that government agencies can get a summarized overview of resource distribution.
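A minimal sketch of the ingestion-and-normalization idea, assuming hypothetical JSON and CSV legacy sources and using a plain dict as a stand-in for the NoSQL document store and its API:

```python
# Sketch: pull heterogeneous "legacy" records into one normalized,
# machine-processable store, then expose a summary query for a dashboard.
import csv
import io
import json

# Two hypothetical legacy sources describing the same kind of resource
json_src = '[{"id": "s1", "type": "sensor", "pm25": 12.4}]'
csv_src  = "id,type,pm25\ns2,sensor,9.1\n"

store = {}  # stand-in for a NoSQL document store, keyed by resource id

def ingest(records):
    for r in records:
        r = dict(r)
        r["pm25"] = float(r["pm25"])   # normalize field types across formats
        store[r["id"]] = r             # one document per resource

ingest(json.loads(json_src))
ingest(csv.DictReader(io.StringIO(csv_src)))

def summary():
    """Minimal 'API' endpoint: aggregate view for a dashboard."""
    vals = [d["pm25"] for d in store.values()]
    return {"count": len(vals), "avg_pm25": sum(vals) / len(vals)}
```

Once both formats land in the same keyed representation, downstream analytics no longer need to know which legacy silo a record originated from.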