Cloud computing utilizes enormous clusters of serviceable and manageable resources that can be virtually and dynamically reconfigured to deliver optimal resource utilization through the pay-per-use model. However, security concerns have been an impediment to the wide adoption of the cloud computing model. In this regard, advancements in cryptography, accelerated by worldwide internet usage, have emerged as key to addressing some of these concerns. In this document, a hybrid cryptographic protocol deploying the Blowfish and Paillier encryption algorithms is presented, and its strength is compared with the existing hybrid Advanced Encryption Standard (AES) and Rivest-Shamir-Adleman (RSA) techniques. Algorithms for a secure data storage protocol are presented in two phases. The proposed hybrid protocol endeavors to strengthen cloud storage through a decrease in computation time and ciphertext size. Simulations have been carried out with Oracle VirtualBox and a Fog server on an Ubuntu 16.04 platform. This combination of symmetric and homomorphic techniques has demonstrated enhanced security, and the use of compression has helped decrease storage space and computation time. Performance analysis has been carried out in terms of computation overhead and quality-of-service parameters such as load with and without attacks, throughput, and stream length for different block cipher modes. Security analysis has been performed using the Hardening Index as an audit parameter with Lynis 2.7.1. Similarly, to halt the aforementioned attacks and regulate traffic, firewall protection has been added to the chosen hybrid algorithms. Finally, enhancements in the performance of the Paillier and Blowfish hybrid scheme, with and without compression, have been demonstrated.
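The Paillier algorithm named above is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, which is what makes it attractive for computing on encrypted cloud data. The abstract does not give the authors' implementation, so the following is only a minimal textbook sketch of Paillier with toy parameters (small primes 17 and 19, chosen here purely for illustration; real deployments use primes of 1024+ bits):

```python
import math
import random

def paillier_keygen(p, q):
    """Toy Paillier key generation (demo primes only, not secure)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
    g = n + 1                      # standard simple choice of generator
    mu = pow(lam, -1, n)           # modular inverse of lambda mod n
    return (n, g), (lam, mu)

def encrypt(pub, m):
    """E(m) = g^m * r^n mod n^2 with a fresh random r coprime to n."""
    n, g = pub
    n2 = n * n
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    """D(c) = L(c^lambda mod n^2) * mu mod n, where L(x) = (x-1)/n."""
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

pub, priv = paillier_keygen(17, 19)
c1, c2 = encrypt(pub, 12), encrypt(pub, 30)
# Homomorphic property: multiplying ciphertexts adds plaintexts.
print(decrypt(pub, priv, (c1 * c2) % (pub[0] ** 2)))  # prints 42
```

In the hybrid scheme described by the abstract, a fast symmetric cipher such as Blowfish would handle the bulk data, while Paillier would protect values (such as keys or aggregable fields) on which encrypted computation is needed.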
In the near future, large scientific collaborations will face unprecedented computing challenges. Processing and storing exabyte datasets require a federated infrastructure of distributed computing resources. The current systems have proven to be mature and capable of meeting the experiment goals, by allowing timely delivery of scientific results. However, a substantial number of interventions from software developers, shifters, and operational teams is needed to efficiently manage such heterogeneous infrastructures. A wealth of operational data can be exploited to increase the level of automation in computing operations by using adequate techniques, such as machine learning (ML), tailored to solve specific problems. The Operational Intelligence project is a joint effort from various WLCG communities aimed at increasing the level of automation in computing operations. We discuss how state-of-the-art technologies can be used to build general solutions to common problems and to reduce the operational cost of the experiment computing infrastructure.
This article describes a new framework, called SIMPLE, for setting up and maintaining classic WLCG sites with minimal operational effort and insight into the WLCG middleware. The framework provides a single common interface to install and configure any of its supported grid services, such as Compute Elements, Batch Systems, Worker Nodes, and miscellaneous middleware packages. It leverages modern container orchestration tools such as Kubernetes and Docker Swarm, and configuration management tools such as Puppet and Ansible, to automate deployment of the WLCG services on behalf of a site admin. The framework is modular and extensible by design. Therefore, it is easy to add support for more grid services as well as infrastructure automation tools to accommodate diverse scenarios at different sites. We provide insight into the design of the framework and our efforts towards development, release, and deployment of its first implementation featuring CREAM CE, the TORQUE Batch System, and TORQUE-based Worker Nodes.
As a joint effort from various communities involved in the Worldwide LHC Computing Grid, the Operational Intelligence project aims at increasing the level of automation in computing operations and reducing human interventions. The distributed computing systems currently deployed by the LHC experiments have proven to be mature and capable of meeting the experimental goals, by allowing timely delivery of scientific results. However, a substantial number of interventions from software developers, shifters, and operational teams is needed to efficiently manage such heterogeneous infrastructures. Under the scope of the Operational Intelligence project, experts from several areas have gathered to propose and work on “smart” solutions. Machine learning, data mining, log analysis, and anomaly detection are only some of the tools we have evaluated for our use cases. In this community study contribution, we report on the development of a suite of operational intelligence services to cover various use cases: workload management, data management, and site operations.