The cryptography employed against user files makes the effects of crypto-ransomware attacks irreversible even after detection and removal. Detecting such attacks early, i.e., during the pre-encryption phase before encryption takes place, is therefore necessary. Existing crypto-ransomware early-detection solutions use a fixed time-based thresholding approach to determine the boundaries of the pre-encryption phase. However, fixed time thresholding implies that all samples start encryption at the same time. This assumption does not necessarily hold for all samples, as the time at which the main sabotage starts varies among crypto-ransomware families due to the obfuscation techniques the malware employs to change its attack strategies and evade detection, which produces different attack behaviors. Additionally, the lack of sufficient data in the early phases of an attack adversely affects the ability of the feature extraction techniques in early-detection models to capture the characteristics of the attack, which in turn decreases detection accuracy. Therefore, this paper proposes a Dynamic Pre-encryption Boundary Delineation and Feature Extraction (DPBD-FE) scheme that determines the boundary of the pre-encryption phase, from which features are extracted and selected more accurately. Unlike the fixed thresholding employed by extant works, DPBD-FE tracks the pre-encryption phase of each instance individually based on the first occurrence of any cryptography-related API. An annotated Term Frequency–Inverse Document Frequency (aTF-IDF) technique is then used to extract features from the runtime data generated during the pre-encryption phase of crypto-ransomware attacks. aTF-IDF overcomes the challenge of insufficient attack patterns during the early phases of the attack lifecycle.
The experimental evaluation shows that DPBD-FE was able to determine the pre-encryption boundaries and extract the features related to this phase more accurately compared to related works.
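The per-instance boundary delineation described above can be sketched as follows. This is a minimal, hypothetical illustration, not the DPBD-FE implementation: the set of cryptography-related API names below is an assumption (a few well-known Windows CryptoAPI/CNG calls), and a real system would work over full sandbox-generated API traces.

```python
# Hypothetical sketch: delineate the pre-encryption phase of one sample by
# truncating its API-call trace at the first cryptography-related API call.
# CRYPTO_APIS is an illustrative, assumed set of Windows crypto API names.

CRYPTO_APIS = {"CryptEncrypt", "CryptGenKey", "CryptDeriveKey", "BCryptEncrypt"}

def pre_encryption_trace(api_trace):
    """Return the prefix of the trace before the first crypto-related call.

    If no crypto API occurs, the entire trace is treated as pre-encryption.
    """
    for i, call in enumerate(api_trace):
        if call in CRYPTO_APIS:
            return api_trace[:i]
    return api_trace

trace = ["RegOpenKey", "CreateFile", "ReadFile", "CryptGenKey", "CryptEncrypt"]
print(pre_encryption_trace(trace))  # ['RegOpenKey', 'CreateFile', 'ReadFile']
```

Because the cut-off is the first crypto-related call in each sample's own trace, the boundary adapts per instance instead of assuming a single fixed time for all samples.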
Recently, fake news has spread widely through the Internet due to the increased use of social media for communication. Fake news has become a significant concern because of its harmful impact on individual attitudes and community behavior. In the past few years, researchers and social media service providers have commonly applied artificial intelligence techniques to rein in fake news propagation. However, fake news detection is challenging due to the use of political language and the high linguistic similarity between real and fake news. In addition, most news sentences are short, so finding representative features that machine learning classifiers can use to distinguish fake from authentic news is difficult, because false and legitimate news share comparable language traits. Existing fake news solutions suffer from low detection performance due to improper representation and model design. This study aims to improve detection accuracy by proposing a deep ensemble fake news detection model based on sequential deep learning. The proposed model was constructed in three phases. In the first phase, features were extracted from news content, preprocessed using natural language processing techniques, enriched using n-grams, and represented using the term frequency–inverse document frequency (TF-IDF) technique. In the second phase, an ensemble model based on deep learning was constructed: multiple binary classifiers were trained using sequential deep learning networks to extract the representative hidden features that can accurately classify news types. In the third phase, a multi-class classifier based on a multilayer perceptron (MLP) was trained on the features extracted from the aggregated outputs of the deep learning-based binary classifiers for final classification. Two popular, well-known datasets (LIAR and ISOT) were used with different classifiers to benchmark the proposed model.
Compared with the state-of-the-art models, which use deep contextualized representations with a convolutional neural network (CNN), the proposed model shows a significant improvement (2.41%) in overall performance in terms of F1-score on the LIAR dataset, which is more challenging than other datasets. Meanwhile, the proposed model achieves 100% accuracy on ISOT. The study demonstrates that traditional features extracted from news content, combined with proper model design, outperform existing models built on text embedding techniques.
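The representation step of the first phase, n-gram enrichment followed by TF-IDF weighting, can be illustrated with a minimal pure-Python sketch. This is not the paper's pipeline: function names, the bigram limit, and the IDF variant are assumptions, and a real system would use a library vectorizer with normalization and smoothing.

```python
# Minimal sketch of n-gram enrichment + TF-IDF weighting (illustrative only).
import math
from collections import Counter

def ngrams(tokens, n_max=2):
    """Enrich a token list with all n-grams up to n_max, joined by '_'."""
    out = []
    for n in range(1, n_max + 1):
        out += ["_".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return out

def tfidf(docs):
    """docs: list of token lists -> one {term: tf-idf weight} dict per doc."""
    grams = [ngrams(d) for d in docs]
    df = Counter(t for g in grams for t in set(g))   # document frequency
    n = len(docs)
    vecs = []
    for g in grams:
        tf = Counter(g)
        # Raw idf = log(N / df); terms in every document get weight 0.
        vecs.append({t: (c / len(g)) * math.log(n / df[t]) for t, c in tf.items()})
    return vecs
```

In this toy form, a term such as "news" that appears in every document receives zero weight, while terms and n-grams distinctive to one class of documents are up-weighted, which is the intuition behind using TF-IDF features for the binary classifiers.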
Vehicular Ad Hoc Networks (VANETs) are among the main enablers of future Intelligent Transportation Systems (ITSs), as they facilitate information sharing that improves road safety and traffic efficiency and enhances passenger comfort. Due to the dynamic nature of VANETs, vehicles need to exchange Cooperative Awareness Messages (CAMs) frequently to maintain network agility and preserve application performance. However, in many situations, broadcasting at a high rate congests the communication channel, rendering the VANET unreliable. Existing broadcasting schemes designed for VANETs use only partial context variables to control the broadcasting rate. Additionally, CAM uncertainty, which is context-dependent, has been neglected, and a predefined fixed certainty threshold has been used instead, which is unsuitable for a highly dynamic context. Consequently, vehicles disseminate a high rate of unnecessary CAMs, which degrades VANET performance. A good broadcasting scheme should accurately determine which CAMs are broadcast and when. To this end, this study proposes a Context-Aware Adaptive Cooperative Awareness Message Broadcasting Scheme (CA-ABS) that combines an Adaptive Kalman Filter, autoregression, sequential deep learning, and a fuzzy inference system. Four context variables are used to represent the vehicular context: individual driving behaviors, CAM uncertainty, vehicle density, and traffic flow. The Kalman Filter and autoregression are used to estimate and predict CAM values, respectively. The deep learning model estimates CAM uncertainty, an important context variable that has been neglected in previous research. The fuzzy inference system takes the context variables as input and determines an accurate broadcasting threshold and broadcasting interval. Extensive simulations have been conducted to evaluate the proposed scheme.
Results show that the proposed scheme improves the CAM delivery ratio and decreases CAM prediction errors.
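The Kalman-filter estimation step mentioned above can be sketched in its simplest scalar form, tracking a single CAM field (e.g., speed) between receptions. This is an illustrative constant-value model, not the scheme's Adaptive Kalman Filter: the class name and the process/measurement noise parameters are assumptions chosen for the example.

```python
# Minimal 1D Kalman filter sketch for estimating one CAM field between
# receptions. Noise parameters q (process) and r (measurement) are assumed.

class ScalarKalman:
    def __init__(self, x0, p0=1.0, q=0.01, r=0.25):
        self.x = x0   # state estimate (e.g., speed in m/s)
        self.p = p0   # estimate variance
        self.q = q    # process noise variance
        self.r = r    # measurement noise variance

    def predict(self):
        """Time update: constant-value model, so only uncertainty grows."""
        self.p += self.q
        return self.x

    def update(self, z):
        """Measurement update: blend the prediction with a received CAM value z."""
        k = self.p / (self.p + self.r)   # Kalman gain in (0, 1)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

Between CAM receptions, `predict()` propagates the estimate while its variance grows; when a CAM arrives, `update()` pulls the estimate toward the measurement. The growing variance is one way to quantify the uncertainty that the scheme feeds into the fuzzy inference system.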