Mobile edge computing (MEC) provides effective cloud services and functionality at the network edge, improving the quality of service (QoS) of end users by offloading computation-intensive tasks. The introduction of deep learning (DL) and hardware technologies now paves the way for detecting traffic status, offloading data, and identifying cyberattacks in MEC. This study introduces an artificial intelligence with metaheuristic-based data offloading technique for secure MEC (AIMDO-SMEC) systems. The proposed AIMDO-SMEC technique incorporates an effective traffic prediction module using Siamese Neural Networks (SNN) to determine the traffic status in the MEC system. In addition, an adaptive sampling cross entropy (ASCE) technique is utilized for data offloading in MEC systems. Moreover, the modified salp swarm algorithm (MSSA) with extreme gradient boosting (XGBoost) is employed to identify and classify cyberattacks present in MEC systems. To examine the enhanced outcomes of the AIMDO-SMEC technique, a comprehensive experimental analysis was carried out; the results demonstrate the improvement achieved by the AIMDO-SMEC technique, with a minimal completion time of tasks (CTT) of 0.680.
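The attack-classification stage described above can be sketched as follows. This is a minimal illustration only: scikit-learn's GradientBoostingClassifier stands in for XGBoost, the MSSA hyperparameter search is not shown, and the per-flow traffic features (packet rate, mean packet size, payload entropy) and their distributions are hypothetical.

```python
# Hedged sketch of gradient-boosted attack classification on synthetic
# MEC traffic features; not the paper's full AIMDO-SMEC pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Hypothetical per-flow features: [packet rate, mean size, entropy]
X_benign = rng.normal([100, 500, 4.0], [20, 80, 0.5], size=(n, 3))
X_attack = rng.normal([900, 120, 1.5], [150, 40, 0.4], size=(n, 3))
X = np.vstack([X_benign, X_attack])
y = np.array([0] * n + [1] * n)  # 0 = benign, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(n_estimators=100, max_depth=3, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)  # held-out classification accuracy
```

In the paper's setup, the tree ensemble's hyperparameters (depth, learning rate, estimator count) would be tuned by MSSA rather than fixed by hand as here.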
In recent years, blockchain technology has proven highly valued and disruptive. Several studies have presented mergers of blockchain with current applications, e.g., medicine, supply chains, and e-commerce. Although blockchain architecture does not yet have a standard, IBM, Microsoft, and AWS offer Blockchain as a Service (BaaS), in addition to the current public chains, e.g., Ethereum, NEO, and Cardano. There are some differences among these public ledgers in terms of development and architecture. This paper introduces the main factors that affect the integration of artificial intelligence with blockchain, as well as how the two could be integrated for forecasting and automation, building a self-regulated chain.
Early detection of Parkinson's disease (PD) from changes in patients' voices would allow intervention before physical symptoms are identified. Various machine learning (ML) algorithms have been developed for PD detection. Nevertheless, these ML methods lack generalization and show reduced classification performance due to subject overlap. To overcome these issues, the proposed work applies a graph long short-term memory (GLSTM) model to classify the dynamic features of PD patients' speech signals. The classification model is further improved by implementing a recurrent neural network (RNN) in the batch normalization layer of the GLSTM and optimizing the network's hidden layers with adaptive moment estimation (ADAM). To address the importance of feature engineering, the proposed system uses linear discriminant analysis (LDA) for dimensionality reduction and a sparse auto-encoder (SAE) for extracting the dynamic speech features. Dynamic features are measured from the energy content of transitions from unvoiced to voiced (onset) and from voiced to unvoiced (offset) speech. The PD dataset is evaluated under 10-fold cross-validation without sample overlap. The proposed smart PD detection method, called RNN-GLSTM-ADAM, is numerically evaluated on sustained phonations in terms of accuracy, sensitivity, specificity, and the Matthews correlation coefficient. The evaluation results show that RNN-GLSTM-ADAM markedly improves PD detection accuracy over static-feature-based conventional ML and DL approaches.
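The feature-engineering and cross-validation protocol above can be illustrated in miniature. In this sketch the speech feature vectors are synthetic, a logistic-regression classifier stands in for the paper's RNN-GLSTM-ADAM network, and only the LDA dimensionality-reduction step and 10-fold evaluation mirror the abstract.

```python
# Hedged sketch: LDA dimensionality reduction + 10-fold CV on synthetic
# speech features. The classifier is a stand-in, not the paper's model.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 150
# Hypothetical 20-dimensional dynamic speech feature vectors
X_pd = rng.normal(0.8, 1.0, size=(n, 20))  # PD subjects
X_hc = rng.normal(0.0, 1.0, size=(n, 20))  # healthy controls
X = np.vstack([X_pd, X_hc])
y = np.array([1] * n + [0] * n)

# With two classes, LDA projects to n_classes - 1 = 1 dimension
model = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                      LogisticRegression())
scores = cross_val_score(model, X, y, cv=10)  # 10-fold CV, as in the abstract
```

A real evaluation would additionally guarantee that no subject's recordings appear in both the training and test folds, which is the "without sample overlap" condition the abstract emphasizes.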
Due to the advanced development of the Internet and information technologies, the quantity of electronic data in the biomedical sector has increased exponentially. To handle this huge amount of biomedical data, automated multi-document biomedical text summarization has become an effective and robust approach to accessing the growing body of technical and medical literature, summarizing multiple source documents while retaining the most informative content. Multi-document biomedical text summarization thus plays a vital role in alleviating the issue of accessing precise and up-to-date information. This paper presents a deep learning based attention long short-term memory (DL-ALSTM) model for multi-document biomedical text summarization. The proposed DL-ALSTM model initially performs data preprocessing to convert the available medical data into a format compatible with further processing. Then, the DL-ALSTM model is executed to summarize the contents of the multiple biomedical documents. To tune the summarization performance of the DL-ALSTM model, the chaotic glowworm swarm optimization (CGSO) algorithm is employed. Extensive experimental analysis is performed to ensure the superiority of the DL-ALSTM model, and the results are investigated using the PubMed dataset. A comprehensive comparative result analysis showcases the efficiency of the proposed DL-ALSTM model against recently presented models.
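The multi-document summarization task itself can be set up in a few lines. The sketch below is a crude extractive stand-in, scoring sentences by average term frequency across documents; the paper's DL-ALSTM is a trained abstractive attention network, which this does not attempt to reproduce. The example documents are invented for illustration.

```python
# Hedged sketch of the multi-document summarization task setup using a
# trivial term-frequency extractive scorer, NOT the paper's DL-ALSTM model.
from collections import Counter

def summarize(docs, k=2):
    """Return the k sentences with the highest average term frequency."""
    sentences = [s.strip() for d in docs for s in d.split(".") if s.strip()]
    tf = Counter(w.lower() for s in sentences for w in s.split())
    def score(s):
        words = s.split()
        return sum(tf[w.lower()] for w in words) / len(words)
    return sorted(sentences, key=score, reverse=True)[:k]

# Two hypothetical source abstracts on the same topic
docs = [
    "Gene expression changes were observed. The drug reduced tumor growth",
    "Tumor growth slowed under the drug. Side effects were mild",
]
top = summarize(docs, k=1)
```

In the paper's pipeline, this scoring function would be replaced by the attention-LSTM network, and the CGSO algorithm would tune that network's parameters.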
The present proliferation of big data has enabled the realization of AI and machine learning. With the rise of big data and machine learning, the idea of improving the accuracy and efficacy of AI applications is also gaining prominence. In the context of traffic applications, machine learning solutions improve safety in hazardous traffic circumstances. Existing architectures face various challenges, of which data privacy is the foremost for vulnerable road users (VRUs). The key reason for failure in traffic control for pedestrians is flawed handling of user privacy. User data are at risk and prone to several privacy and security gaps: if an invader succeeds in infiltrating the setup, exposed data can be malevolently influenced, contrived, and misrepresented for illegitimate purposes. In this study, an architecture based on machine learning is proposed to analyze and process big data efficiently in a secure environment. The proposed model considers the privacy of users during big data processing. The architecture is a layered framework with a parallel and distributed module that applies machine learning to big data to achieve secure big data analytics. It provides a distinct unit for privacy management using a machine learning classifier, and a stream processing unit is integrated with the architecture to process the information. The proposed system is evaluated using real-time datasets from various sources and experimentally tested with reliable datasets, demonstrating the effectiveness of the architecture. The data ingestion results are also highlighted along with the training and validation results.
This article focuses on the performance evaluation of a new methodology, imputation by feature importance (IBFI), whose imputed datasets serve downstream regression scenarios on soil radon gas concentration (SRGC) time-series data. The time-series data were collected over a fourteen (14) month period that included four seismic events and were used for experimentation. IBFI was tested, and the results obtained show it to be more efficient at imputing missing patterns in the investigated time series than traditionally used imputation methods, viz. mean, median, mode, predictive mean matching (PMM), and hot-deck imputation. The IBFI methodology was evaluated in a variety of settings, namely data missing not at random (MNAR), missing completely at random (MCAR), and missing at random (MAR), with missingness percentages ranging from 10% to 30%. In this study, the imputed datasets, nine for each imputation method, were then used to predict the attribute of interest, radon concentration (RN), keeping the others as independent attributes: the thoron, temperature, relative humidity, and pressure time series. A support vector machine (SVM) with a linear kernel was used as the learning algorithm, and its performance was evaluated in terms of how efficiently and with how little bias values were imputed. Statistical performance measures, viz. root mean squared log error (RMSLE), root mean square error (RMSE), mean squared error (MSE), and mean absolute percentage error (MAPE), were calculated to assess performance. The findings of our study show that the IBFI-imputed dataset provides a better-fitted model: model generation and prediction on the IBFI-imputed time series yield more accurate predictions than on the mean-, median-, mode-, PMM-, and hot-deck-imputed time series. Furthermore, the PMM- and median-imputed time series also perform close to the IBFI-imputed time series.
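The evaluation protocol above (inject missingness, impute, fit a linear-kernel SVM, score the predictions) can be sketched with one simple imputer. The sketch below uses mean imputation under synthetic MCAR missingness purely to illustrate the pipeline; IBFI itself, and the covariate relationships between radon and thoron, temperature, humidity, and pressure, are not reproduced here.

```python
# Hedged sketch of the imputation-evaluation pipeline on synthetic data:
# inject MCAR missingness, mean-impute, fit SVR (linear kernel), score RMSE.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
n = 300
# Hypothetical standardized covariates: thoron, temperature, humidity, pressure
X = rng.normal(size=(n, 4))
w = np.array([1.5, -2.0, 0.7, 1.0])          # invented linear relationship
radon = X @ w + rng.normal(scale=0.3, size=n)  # target series (RN)

# Inject ~20% MCAR missingness into the first covariate, then mean-impute
X_miss = X.copy()
mask = rng.random(n) < 0.2
X_miss[mask, 0] = np.nan
X_imp = X_miss.copy()
X_imp[mask, 0] = np.nanmean(X_miss[:, 0])

# Linear-kernel SVM as in the abstract; train/test split 200/100
svm = SVR(kernel="linear").fit(X_imp[:200], radon[:200])
rmse = mean_squared_error(radon[200:], svm.predict(X_imp[200:])) ** 0.5
```

Repeating this loop with each imputer (mean, median, mode, PMM, hot-deck, IBFI) at 10-30% missingness and comparing RMSLE, RMSE, MSE, and MAPE gives the comparison the abstract reports.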
[Context and Motivation] Before eliciting and gathering requirements for a software project, it is pivotal to know the concerned stakeholders. Without identifying the relevant stakeholders it becomes hard to elicit the actual system requirements, leading the software project to failure. Despite the paramount importance of stakeholder identification in requirements elicitation, it has received little attention in the software engineering literature. [Method] For this purpose, we conducted a thorough systematic literature review (SLR) on stakeholder identification (SI) and its methods in requirements elicitation. Although a literature study on SI in requirements elicitation was conducted previously, to the best of our knowledge no one has yet proposed a standard or baseline research method for stakeholder identification, stakeholder assessment, or stakeholder interaction. This provides an opportunity to update the current SLR on SI in requirements elicitation, covering 2011 to 2021, in search of a baseline SI methodology. To this end, we explored the existing literature on SI methods in requirements elicitation. [Principal Ideas/Results] We identify seventeen research methodologies for SI, eight key stakeholder interaction methods, and ten stakeholder assessment methods in requirements elicitation. To further enhance the stakeholder identification process, we additionally capture pivotal information such as potential stakeholder categories. Based on the proposed SLR, we also identify the existing gaps and new opportunities for SI methods in requirements elicitation. [Contribution] These SI methodologies help requirements engineers and practitioners identify key stakeholders and efficiently improve requirements quality.
Moreover, this research study helps identify the effective practices used for traditional and CrowdRE SI, uncover consequences that can affect the effectiveness of SI, and recommend advisable SI practices for future use. It would help software researchers and developers identify the correct and concerned stakeholders efficiently and accurately to improve end-user satisfaction, instead of treating SI as a self-evident task. INDEX TERMS Stakeholder identification, stakeholder methods, requirements elicitation, SLR.