Traditional peer-to-peer (P2P) systems were restricted to sharing files on the Internet. Although some more recent P2P distributed systems have tried to support transparent sharing of other types of resources, such as computer processing power, none allow and support sharing of all types of resources available on the Internet. This is mainly because the resource management parts of P2P systems are custom designed to support the specific features of only one type of resource, making simultaneous access to all types of resources impractical. Another shortcoming of existing P2P systems is that they follow a client/server model of resource sharing that makes them structurally constrained and dependent on dedicated servers (resource managers). Clients must get permission from a limited number of servers to share or access resources, and resource management mechanisms run on these servers. Because resource management by servers is not dynamically reconfigurable, such P2P systems do not scale to the ever-growing extent of the Internet. We present an integrated framework for sharing all types of resources in P2P systems by using a dynamic structure for managing four basic types of resources, namely process, file, memory, and I/O, in the same way they are routinely managed by operating systems. The proposed framework allows P2P systems to use dynamically reconfigurable resource management mechanisms in which each machine in the P2P system can serve as both a server and a client at the same time. The pattern of requests for shared resources at a given time identifies which machines are currently servers and which are currently clients. The client/server pattern changes with changes in the pattern of requests for distributed resources. Scalable P2P systems with dynamically reconfigurable structures can thus be built using our proposed resource management mechanisms.
This dynamic structure also allows for the interoperability of different P2P systems.
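The dual server/client role described above can be sketched as a peer whose current role is derived from the pattern of outstanding resource requests rather than fixed by the system structure. This is an illustrative sketch only; the class and method names are hypothetical, not the paper's actual protocol:

```python
from collections import Counter

# the four basic resource types managed the way an OS manages them
RESOURCE_TYPES = {"process", "file", "memory", "io"}

class Peer:
    """Illustrative peer that is simultaneously a server and a client.

    Its role at any moment follows from the current request pattern
    (hypothetical sketch, not the paper's concrete mechanism).
    """

    def __init__(self, name):
        self.name = name
        self.serving = Counter()     # requests this peer is serving
        self.requesting = Counter()  # requests this peer has issued

    def receive_request(self, rtype):
        assert rtype in RESOURCE_TYPES
        self.serving[rtype] += 1

    def issue_request(self, rtype):
        assert rtype in RESOURCE_TYPES
        self.requesting[rtype] += 1

    def current_roles(self):
        roles = set()
        if sum(self.serving.values()) > 0:
            roles.add("server")
        if sum(self.requesting.values()) > 0:
            roles.add("client")
        return roles

p = Peer("node-1")
p.receive_request("file")   # another peer asks for a shared file
p.issue_request("memory")   # this peer asks for remote memory
print(p.current_roles())    # both roles at once (set order may vary)
```

As the request pattern changes, `current_roles()` changes with it, which is the sense in which the structure is dynamically reconfigurable.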
Load balancing is one of the main challenges of structured P2P systems that use distributed hash tables (DHTs) to map data items (objects) onto the nodes of the system. In a typical P2P system with N nodes, the use of random hash functions for distributing keys among peer nodes can lead to O(log N) imbalance. Most existing load balancing algorithms for structured P2P systems are not proximity-aware, assume a uniform distribution of objects in the system, and often ignore node heterogeneity. In this paper we propose a load balancing algorithm that considers node heterogeneity, changes in object popularities, and link latencies between nodes. It also considers the load transfer time as an important factor in calculating the cost of load balancing. We present the algorithm using node movement and replication mechanisms. We also show via simulation how well the algorithm performs under different loads in a typical structured P2P system.
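The imbalance caused by random hashing can be observed directly with a small simulation: place N node identifiers on a consistent-hashing ring, assign each key to its successor node, and compare the most loaded node against the average. This is a generic sketch of the phenomenon the abstract cites, not the paper's simulation setup:

```python
import hashlib

def h(s):
    # map a string to a point on a 2**32 identifier ring
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % (2**32)

def imbalance(num_nodes=64, num_keys=64_000):
    """Return max load divided by average load under random hashing."""
    nodes = sorted(h(f"node-{i}") for i in range(num_nodes))
    load = {p: 0 for p in nodes}
    for k in range(num_keys):
        point = h(f"key-{k}")
        # the successor node on the ring owns the key (wrap around)
        owner = next((p for p in nodes if p >= point), nodes[0])
        load[owner] += 1
    avg = num_keys / num_nodes
    return max(load.values()) / avg

# with one hash position per node, the heaviest node typically holds
# several times the average load, consistent with O(log N) imbalance
print(round(imbalance(), 2))
```

Running this with different values of `num_nodes` shows the gap between the heaviest and the average node growing roughly logarithmically, which is the imbalance that load balancing algorithms like the one proposed here must correct.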
As technology advanced and e-commerce services expanded, credit cards became one of the most popular payment methods, resulting in an increase in the volume of banking transactions. Furthermore, the significant increase in fraud imposes high costs on banking transactions. As a result, detecting fraudulent activities has become an important research topic. In this study, we consider the use of class weight-tuning hyperparameters to control the weight of fraudulent and legitimate transactions. In particular, we use Bayesian optimization to optimize the hyperparameters while addressing practical issues such as unbalanced data. We propose weight-tuning as a pre-processing step for unbalanced data, as well as CatBoost and XGBoost to improve the performance of the LightGBM method through a voting mechanism. Finally, to improve performance even further, we use deep learning to fine-tune the hyperparameters, particularly our proposed weight-tuning one. We perform experiments on real-world data to test the proposed methods. To better cover unbalanced datasets, we use recall-precision metrics in addition to the standard ROC-AUC. CatBoost, LightGBM, and XGBoost are evaluated separately using 5-fold cross-validation. Furthermore, the majority voting ensemble learning method is used to assess the performance of the combined algorithms. According to the results, LightGBM and XGBoost achieve the best results, with ROC-AUC = 0.95, precision = 0.79, recall = 0.80, F1 score = 0.79, and MCC = 0.79. By using deep learning and the Bayesian optimization method to tune the hyperparameters, we also reach ROC-AUC = 0.94, precision = 0.80, recall = 0.82, F1 score = 0.81, and MCC = 0.81. This is a significant improvement over the state-of-the-art methods we compared against.
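The two core ingredients of the approach — weighting the rare fraudulent class more heavily than the legitimate class, and combining several models by majority voting — can be sketched in a few lines. This is a minimal stand-in using the common "balanced" weighting heuristic (the paper tunes these weights further with Bayesian optimization), and the model predictions below are hypothetical placeholders for CatBoost, LightGBM, and XGBoost outputs:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Weight each class inversely to its frequency: n / (k * count).

    This is the usual 'balanced' heuristic; the paper treats these
    weights as hyperparameters and tunes them further.
    """
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

def majority_vote(*prediction_lists):
    """Combine per-model predictions by simple majority voting."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*prediction_lists)]

# 9 legitimate (0) vs 1 fraudulent (1) transaction: the fraud class
# receives a proportionally larger weight during training
y = [0] * 9 + [1]
print(balanced_class_weights(y))  # {0: 0.555..., 1: 5.0}

# three hypothetical models voting on four transactions
cat  = [0, 1, 0, 1]
lgbm = [0, 1, 1, 1]
xgb  = [0, 0, 0, 1]
print(majority_vote(cat, lgbm, xgb))  # [0, 1, 0, 1]
```

In practice the class weights would be passed to each boosting library's training call (e.g. as a class-weight or positive-class-scale parameter) rather than used standalone.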
The emergence of Big Data applications has paved the way for enterprises to use Big Data as a value-creation strategy for their business; however, most enterprises do not know how to generate value from their massive volumes of data. Big Data Analytics results can help enterprises make better decisions and provide them with additional profits. Studying different research works dedicated to value creation through Big Data Analytics, this paper (a) highlights the current state of the art proposed for creating value from Big Data Analytics, (b) identifies the essential factors and discusses their effects upon value creation, and (c) provides a classification of the cutting-edge technologies in this field.