Sensor deployment is one of the most important problems in Wireless Sensor Networks (WSNs), since it represents the first phase on which most network operations depend. Sensor deployment strategies can be classified into two classes: deterministic and autonomous (random) deployment. In deterministic deployment, the deployment field is assumed to be accessible and the number of sensors small enough to be manually placed at specific locations. On the other hand, with a large number of sensors or an inaccessible field, random deployment of the sensors becomes the solution. However, random deployment requires sensors to relocate (move) automatically for coverage and connectivity purposes. In addition, after a period of time, the sensor topology might change due to hardware failure or depleted energy, so a redeployment and/or sensor relocation process is essential. Nevertheless, the energy consumed by mobility, as well as sensor load balancing, are essential factors to consider during the initial deployment and relocation processes. This paper proposes two deployment algorithms to manage these situations; they achieve sensor energy balancing with a small amount of deployment energy consumption. A set of simulation experiments is conducted to compare the proposed algorithms with existing work in terms of coverage performance, average moving distance, and message complexity. Keywords—mobile sensor networks, deployment, clustering, potential field, redundant sensors.
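The potential-field idea named in the keywords is commonly realized with virtual repulsive forces: randomly dropped sensors push each other apart until coverage spreads out. The sketch below illustrates that general technique, not the paper's specific algorithms; the unit-square field, force constant, repulsion threshold, and iteration count are all illustrative assumptions.

```python
import math
import random

random.seed(1)

def virtual_force(p, q, d_threshold=1.0, k=0.1):
    # Repulsive force pushing sensor p away from neighbor q
    # whenever they are closer than d_threshold (assumed constants).
    d = math.dist(p, q)
    if d == 0 or d >= d_threshold:
        return (0.0, 0.0)
    mag = k * (d_threshold - d) / d
    return ((p[0] - q[0]) * mag, (p[1] - q[1]) * mag)

# Random initial drop of 10 sensors in a unit-square field.
sensors = [(random.random(), random.random()) for _ in range(10)]

for _ in range(50):  # iterate until the virtual forces settle
    new_positions = []
    for i, p in enumerate(sensors):
        fx = fy = 0.0
        for j, q in enumerate(sensors):
            if i != j:
                dfx, dfy = virtual_force(p, q)
                fx += dfx
                fy += dfy
        # Move along the net force, clamped to the deployment field.
        new_positions.append((min(max(p[0] + fx, 0.0), 1.0),
                              min(max(p[1] + fy, 0.0), 1.0)))
    sensors = new_positions
```

Energy-aware variants of this scheme (as the paper targets) would additionally weight each movement step by the sensor's remaining battery, so depleted nodes move less.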
Virtual screening is a critical process in drug discovery, and it relies on machine learning to facilitate the screening process. It enables the discovery of molecules that bind to a specific protein to form a drug. Despite its benefits, virtual screening generates enormous amounts of data and suffers from drawbacks such as high dimensionality and class imbalance. This paper tackles data imbalance and aims to improve virtual screening accuracy, especially for the minority class. For a dataset prepared without considering its imbalanced nature, most classification methods tend to have high predictive accuracy for the majority category, while accuracy for the minority category is significantly poor. The paper proposes a K-means algorithm coupled with the Synthetic Minority Oversampling Technique (SMOTE) to overcome the problem of imbalanced datasets. The proposed algorithm is named KSMOTE. Using KSMOTE, minority-class samples can be identified with high accuracy and detected with high precision. A large set of experiments was run on Apache Spark using numeric PaDEL and fingerprint descriptors. The proposed solution was compared to both the no-sampling baseline and plain SMOTE on the same datasets. Experimental results showed that the proposed solution outperformed the other methods.
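The K-means-plus-SMOTE combination can be sketched in miniature: cluster the minority class first, then generate synthetic samples by interpolating between pairs of real samples inside each cluster, so that synthetic points never bridge unrelated regions of feature space. This is a minimal illustration of the general idea, not the paper's KSMOTE implementation; the toy 2-D data, cluster count, and oversampling ratio are assumptions.

```python
import math
import random

random.seed(0)

def kmeans(points, k, iters=20):
    # Plain Lloyd's k-means over tuples of floats.
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return clusters

def smote_cluster(cluster, n_new):
    # SMOTE-style interpolation between random pairs of minority
    # samples drawn from the same cluster.
    synthetic = []
    for _ in range(n_new):
        a, b = random.sample(cluster, 2)
        t = random.random()
        synthetic.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
    return synthetic

# Toy minority-class samples: two well-separated groups of 2-D features.
minority = [(1.0, 1.1), (1.2, 0.9), (0.9, 1.0),
            (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]

oversampled = list(minority)
for cl in kmeans(minority, k=2):
    if len(cl) >= 2:
        oversampled += smote_cluster(cl, n_new=len(cl))
```

Clustering before interpolation is the point of the combination: plain SMOTE could interpolate between the two groups above and create synthetic samples in empty space between them.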
Today, the number of social network users is increasing, and many users share opinions on different aspects of life every day, so colloquially written text is growing dramatically as a medium for expressing ideas, especially across the Web. Social networks are therefore rich sources of data for opinion mining and sentiment analysis. Colloquial Arabic dialects are the languages people use to communicate with each other on social networks, and recently a massive amount of colloquial Arabic data has accumulated there. As the available data grows, so does the need to process and exploit it. However, most available tools and resources (morphological analyzers, disambiguation systems, annotated data, and parallel corpora) target Modern Standard Arabic (MSA). Automatic transformation from colloquial Arabic dialects to MSA therefore becomes urgent, so that MSA tools and resources can be applied to the dialects. The most prominent colloquial is the Egyptian dialect, which is considered the most widely used and understood dialect throughout the Arab world. Consequently, the proposed system focuses on the Egyptian colloquial dialect to prove our approach.
Humans recognize each other by their various characteristics. For example, a father can recognize his daughter by her face when he meets her and by her voice when she speaks to him. In information technology, biometrics is defined as a method to measure and analyze human body characteristics such as the iris, DNA, fingerprints, facial patterns, the retina, hand measurements, and so on. For authentication purposes, biometrics is a method of recognizing humans based on unique physical or behavioral characteristics [1,2]. Not every human trait can serve as a biometric; a usable trait should be characterized by: universality (all persons must possess the trait); distinctiveness (the trait should be unique for each person, as DNA is); permanence (a good biometric trait is invariant or changes only slowly over time); and collectability (the trait should be easily measured quantitatively). In addition, the biometric system itself should be characterized by: acceptability (the extent to which users accept using biometric identifiers daily); performance (accuracy, speed, and robustness according to requirements); and circumvention resistance (it should counteract fraudsters) [3].
Our case study in mobility management for vehicular networks is to provide Internet connectivity without any interruption and with no packet loss, in both V2I (Vehicle-to-Infrastructure) and V2V (Vehicle-to-Vehicle) communications. Handover delay is one of the critical QoS parameters, in addition to packet loss, throughput, and data transmission delay. In this paper, a Smart Buffering idea is proposed to enhance the HI-NEMO protocol, an extension of NEMO that combines a cross-layer mechanism with resource allocation. HI-NEMO reduces latency and packet loss during handovers and performs well in its proactive mode; however, packet loss still occurs in its reactive mode during the link-down period when a vehicle moves through a base station cell of small coverage radius. The Smart Buffering mechanism largely prevents this loss by buffering loss-candidate packets at the Root FMA (Foreign Mobile Agent), then forwarding and reordering them at the new FMA; it also removes redundant packets at the Root FMA. Mathematical analysis shows that the enhanced HI-NEMO protocol prevents packet loss during reactive handover and gives optimal throughput while supporting high-velocity vehicles.
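The three duties attributed to Smart Buffering (buffering loss candidates at the Root FMA, removing redundant copies, and reordering at the new FMA before delivery) can be sketched as follows. The class names, sequence-number scheme, and packet layout here are hypothetical illustrations of the described mechanism, not the protocol's actual message formats.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int
    payload: str

class RootFMA:
    """Buffers loss-candidate packets while the vehicle's link is down."""
    def __init__(self):
        self.buffer = {}
    def buffer_packet(self, pkt):
        # Keep only one copy per sequence number
        # (redundant-packet removal at the Root FMA).
        self.buffer.setdefault(pkt.seq, pkt)
    def flush_to(self, new_fma):
        # Forward buffered packets once the new link is up.
        for pkt in self.buffer.values():
            new_fma.receive(pkt)
        self.buffer.clear()

class NewFMA:
    """Receives forwarded packets and reorders them for delivery."""
    def __init__(self):
        self.received = []
    def receive(self, pkt):
        self.received.append(pkt)
    def deliver_in_order(self):
        # Reorder by sequence number before delivery to the vehicle.
        return sorted(self.received, key=lambda p: p.seq)

root = RootFMA()
for seq in [3, 1, 2, 2, 1]:  # out-of-order arrivals with duplicates
    root.buffer_packet(Packet(seq, f"data{seq}"))
new = NewFMA()
root.flush_to(new)
ordered = [p.seq for p in new.deliver_in_order()]
print(ordered)  # → [1, 2, 3]
```

The reordering step is what keeps TCP flows healthy across the handover: without it, out-of-order delivery after the flush would trigger spurious retransmissions.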
An unprecedented growth in biomedical data has been observed in recent years. The capability to analyze a large portion of this data will offer many opportunities that will in turn affect the future of health care [1]. In this age, traditional storage and processing techniques are not sufficient to meet the demand, and hence computing techniques must scale to handle the huge volume of data. The main difficulty in managing these data is the speed at which they are generated: data generation is much faster than the available computing resources for data analysis.