Current large distributed systems allow users to share and trade resources. In cloud computing, users purchase different types of resources from one or more resource providers using a fixed pricing scheme. Federated clouds, a topic of recent interest, allow different cloud providers to share resources for increased scalability and reliability. However, users and providers of cloud resources are rational and maximize their own interest when consuming and contributing shared resources. In this paper, we present a dynamic pricing scheme suitable for rational users' requests containing multiple resource types. Using simulations, we compare the efficiency of our proposed strategy-proof dynamic scheme with fixed pricing, and show that dynamic pricing increases both user welfare and the percentage of successful requests.
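To make the idea concrete, below is a minimal Python sketch of one possible demand-driven pricing rule for requests spanning multiple resource types. The base prices, the linear utilization rule, and all names are illustrative assumptions, not the scheme proposed in the paper.

```python
# Hypothetical sketch: demand-driven price updates for multiple resource
# types (CPU, memory, bandwidth). The update rule and all constants are
# illustrative assumptions, not the paper's mechanism.

BASE_PRICE = {"cpu": 0.10, "mem": 0.05, "net": 0.02}  # fixed-price baseline
ALPHA = 0.5  # sensitivity of price to utilization

def dynamic_price(resource: str, used: float, capacity: float) -> float:
    """Scale the base price with the current utilization of the resource."""
    utilization = used / capacity
    return BASE_PRICE[resource] * (1.0 + ALPHA * utilization)

def quote(request: dict, used: dict, capacity: dict) -> float:
    """Price a multi-resource request under the provider's current load."""
    return sum(dynamic_price(r, used[r], capacity[r]) * amount
               for r, amount in request.items())

# A request for 4 CPUs and 8 GB of memory on a half-loaded provider:
print(quote({"cpu": 4, "mem": 8},
            used={"cpu": 50, "mem": 128, "net": 10},
            capacity={"cpu": 100, "mem": 256, "net": 100}))
```

Under such a rule, prices rise with contention, discouraging rational users from over-requesting scarce resources while keeping idle resources cheap.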
The continuous increase in the volume, variety, and velocity of Big Data exposes datacenter resource scaling to an energy utilization problem. Traditionally, datacenters employ x86-64 (big) server nodes with power usage of tens to hundreds of Watts. Lately, however, low-power (small) systems originally developed for mobile devices have seen significant improvements in performance. These improvements could lead to the adoption of such small systems in servers, as announced by major industry players. In this context, we systematically conduct a performance study of Big Data execution on small nodes in comparison with traditional big nodes, and present insights that would be useful for future development. We run Hadoop MapReduce, MySQL, and in-memory Shark workloads on clusters of ARM big.LITTLE boards and Intel Xeon server systems. We evaluate execution time, energy usage, and total cost of running the workloads on self-hosted ARM and Xeon nodes. Our study shows that there is no one-size-fits-all rule for judging the efficiency of executing Big Data workloads on small and big nodes. However, small memory size, low memory and I/O bandwidth, and software immaturity combine to cancel out the low-power advantage of ARM servers. We show that I/O-intensive MapReduce workloads are more energy-efficient to run on Xeon nodes. In contrast, database query processing is always more energy-efficient on ARM servers, at the cost of slightly lower throughput. With minor software modifications, CPU-intensive MapReduce workloads are almost four times cheaper to execute on ARM servers.
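As a back-of-the-envelope illustration of how such cost comparisons work, the Python sketch below combines energy cost with amortized hardware cost over a workload's runtime. All figures (power draws, runtimes, node prices, lifetimes) are assumed values for illustration, not measurements from the study.

```python
# Illustrative cost model: total cost = energy cost + amortized hardware
# cost. All numbers below are assumptions, not results from the paper.

ELECTRICITY = 0.12 / 3.6e6  # $/J (i.e., 0.12 $/kWh)

def workload_cost(power_w, runtime_s, node_price, lifetime_s):
    energy_cost = power_w * runtime_s * ELECTRICITY
    hardware_cost = node_price * (runtime_s / lifetime_s)  # amortization
    return energy_cost + hardware_cost

THREE_YEARS = 3 * 365 * 24 * 3600
# Hypothetical CPU-intensive job: the ARM node is slower but far cheaper.
xeon = workload_cost(power_w=200, runtime_s=600,
                     node_price=4000, lifetime_s=THREE_YEARS)
arm = workload_cost(power_w=20, runtime_s=1800,
                    node_price=300, lifetime_s=THREE_YEARS)
print(f"Xeon: ${xeon:.4f}  ARM: ${arm:.4f}  ratio: {xeon / arm:.1f}x")
```

With these assumed numbers the ARM node comes out roughly four times cheaper despite a 3x longer runtime, which shows how hardware amortization, not just energy, can dominate the comparison.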
Recent advancements in high-performance network interconnects significantly narrow the performance gap between intra-node and inter-node communication, opening up opportunities for distributed memory platforms to enforce cache coherence among distributed nodes. To this end, we propose GAM, an efficient distributed in-memory platform that provides a directory-based cache coherence protocol over remote direct memory access (RDMA). GAM manages the free memory distributed among multiple nodes to provide a unified memory model, and supports a set of user-friendly APIs for memory operations. To remove writes from critical execution paths, GAM allows a write to be reordered with the following reads and writes, and hence enforces partial store order (PSO) memory consistency. A lightweight logging scheme is designed to provide fault tolerance in GAM. We further build a transaction engine and a distributed hash table (DHT) atop GAM to show the ease of use and applicability of the provided APIs. Finally, we conduct an extensive microbenchmark to evaluate the read/write/lock performance of GAM under various workloads, and a macrobenchmark against the transaction engine and DHT. The results show the superior performance of GAM over existing distributed memory platforms.
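To illustrate the PSO consistency model described above, the following self-contained Python sketch models a per-node write buffer: writes leave the critical path and become globally visible only after an explicit fence. This is a toy model of the consistency semantics only, not GAM's actual implementation or API.

```python
# Minimal sketch of partial store order (PSO): writes are buffered per node
# and may complete after later reads and writes; an explicit fence drains
# the buffer. A toy model of the idea, not GAM's real protocol.

class PSONode:
    def __init__(self, global_mem: dict):
        self.mem = global_mem   # shared "global" memory
        self.write_buffer = []  # pending (addr, value) stores

    def write(self, addr, value):
        # Buffered: off the critical path, invisible to others until drained.
        self.write_buffer.append((addr, value))

    def read(self, addr):
        # Forward the node's own latest buffered value if one exists;
        # otherwise fall through to global memory.
        for a, v in reversed(self.write_buffer):
            if a == addr:
                return v
        return self.mem.get(addr)

    def fence(self):
        # Make all buffered writes globally visible, in program order.
        for addr, value in self.write_buffer:
            self.mem[addr] = value
        self.write_buffer.clear()

mem = {}
writer, reader = PSONode(mem), PSONode(mem)
writer.write("payload", 42)
print(reader.read("payload"))  # None: the write is still buffered
writer.fence()
print(reader.read("payload"))  # 42: visible only after the fence
```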
In component-based simulation, models developed in different locations and for specific purposes can be selected and assembled in various combinations to meet diverse user requirements. This paper proposes CODES (COmposable Discrete-Event scalable Simulation), an approach to component-based modeling and simulation that supports model reuse across multiple application domains. A simulation component is viewed by the modeller as a black box with in- and/or out-channels. The attributes and behavior of the component, abstracted as a meta-component, are described using COML (COmponent Markup Language), a markup language we propose for representing simulation components. The integrated approach, supported by our proposed COSMO (COmponent-oriented Simulation and Modeling Ontology) ontology, consists of four main steps. Component discovery returns a set of syntactically valid model components. Syntactic composability is determined by our proposed EBNF syntactic composition rules. Validation of semantic composability is performed using our proposed data and behavior alignment algorithms. The semantically valid simulation component is subsequently stored in a model repository for reuse. As proof of concept, we discuss a prototype implementation of the CODES framework using queueing systems as an example application domain.
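As a simplified illustration of the syntactic composability step, the Python sketch below models meta-components with typed in-/out-channels and checks channel compatibility between them. The field names and matching rule are assumptions standing in for COML descriptions and the EBNF composition rules, not the actual CODES artifacts.

```python
# Illustrative syntactic composability check: two components compose only
# if an out-channel type of one matches an in-channel type of the other.
# Field names are assumptions, not COML syntax.

def meta_component(name, in_channels, out_channels):
    """A meta-component as a black box with typed in-/out-channels."""
    return {"name": name, "in": in_channels, "out": out_channels}

def syntactically_composable(upstream, downstream):
    """True if some out-channel of `upstream` type-matches an in-channel
    of `downstream` (a simplified stand-in for the EBNF rules)."""
    return any(t in downstream["in"].values()
               for t in upstream["out"].values())

source = meta_component("ArrivalProcess", {}, {"out0": "Job"})
queue = meta_component("FIFOQueue", {"in0": "Job"}, {"out0": "Job"})
server = meta_component("Server", {"in0": "Job"}, {"out0": "Job"})

print(syntactically_composable(source, queue))  # True
print(syntactically_composable(queue, server))  # True
```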
Semantic composability aims to ensure that the composition of simulation components is meaningful in terms of their expressed behavior and achieves the desired objective of the newly composed model. Validation of semantic composability is a non-trivial problem because reused simulation components are heterogeneous in nature, and validation must consider various orthogonal aspects, including logical, temporal, and formal ones. In this paper, we propose a layered approach to semantic composability validation with increasing accuracy and complexity. First, concurrent process validation exploits model checking for logical properties of component coordination, including deadlock, safety, and liveness. Second, meta-simulation addresses temporal properties by validating the safety and liveness of the composition through simulation time. Third, perfect model validation provides a formal composition validation guarantee by determining the behavioral equivalence between the composed model and a perfect model. In contrast to state-of-the-art validation approaches, we propose time-based formalisms to describe simulation components and compare composition behaviors through time using semantically related composition states. As proof of concept, we discuss examples of queueing network composition and implementation using existing model checkers and constraint solvers. Lastly, we evaluate the complexity of each layer as a function of the number of components.
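The following toy Python sketch illustrates the perfect-model layer: the composed model's state trace is compared against a reference ("perfect") model's trace over simulation time. The traces, the step functions, and the equivalence notion are deliberately simplified assumptions, not the paper's formalisms.

```python
# Toy illustration of perfect model validation: compare the composed
# model's state trajectory with a reference model's, step by step through
# simulation time. All models here are illustrative assumptions.

def trace(model_step, initial_state, horizon):
    """Collect a model's state trajectory over `horizon` time steps."""
    states, s = [initial_state], initial_state
    for _ in range(horizon):
        s = model_step(s)
        states.append(s)
    return states

def behaviorally_equivalent(composed_step, perfect_step, initial_state, horizon):
    # Compare semantically related states at each simulation time point.
    return (trace(composed_step, initial_state, horizon)
            == trace(perfect_step, initial_state, horizon))

# A queue-length update built two ways (one arrival, one service per step):
perfect = lambda q: max(q + 1 - (1 if q > 0 else 0), 0)
composed = lambda q: max(q, 0) + 1 - (1 if q > 0 else 0)

print(behaviorally_equivalent(composed, perfect, initial_state=0, horizon=10))
```

In a real setting the equality test would be replaced by a behavioral-equivalence relation over semantically related composition states, but the bounded, step-by-step comparison through time is the same shape of argument.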
Emergent properties are becoming increasingly important as systems grow in size and complexity. Despite recent research interest in understanding emergent behavior, practical approaches remain a key challenge. This paper proposes an integrated approach to the identification of emergence from two perspectives. A post-mortem emergence analysis requires a priori knowledge about emergence and can identify the causes of emergent behavior. In contrast, a live analysis, in which emergence is identified as it happens, does not require prior knowledge; instead, it relies on a more rigorous definition of individual model components in terms of what they achieve, rather than how. Our proposed approach integrates reconstructability analysis into the validation of emergence, which is part of our proposed component-based model development life-cycle.
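As a rough illustration of the live-analysis perspective, the Python sketch below flags observed system-level behavior that is not accounted for by composing the components' declared specifications of what they achieve. The specifications, observables, and detection rule are all illustrative assumptions, not the paper's method.

```python
# Toy sketch of live emergence identification: components declare *what*
# they achieve; observed whole-system behavior not predicted by composing
# those declarations is flagged as potentially emergent.

def predicted(components, state):
    """Compose component specifications: each predicts its own output."""
    return {name: spec(state) for name, spec in components.items()}

def detect_emergence(components, state, observed, tolerance=0.0):
    """Return observables that deviate from the composed prediction."""
    pred = predicted(components, state)
    return {k for k in observed
            if abs(observed[k] - pred.get(k, 0.0)) > tolerance}

components = {"a": lambda s: s["a_in"] * 2, "b": lambda s: s["b_in"] + 1}
state = {"a_in": 3.0, "b_in": 1.0}
observed = {"a": 6.0, "b": 2.0, "sync": 0.9}  # "sync" was never specified

print(detect_emergence(components, state, observed))  # {'sync'}
```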
After the introduction of donor screening for hepatitis C in September 1991, 95 potentially infectious blood donors who had given blood before this date were identified at the Oxford blood centre. Three hundred and ninety-nine blood components previously issued from these donors were identified in the course of the national HCV look-back programme. Of 399 questionnaires sent to hospital blood banks, 392 were returned, identifying 290 recipients, of whom 177 (61%) had died and 113 (39%) were still alive 4-13 years after transfusion. One hundred and four recipients were traced and tested. Forty-nine recipients were not HCV infected. Forty-four of 58 (76%) recipients who received blood from donors found to be HCV RNA positive after September 1991 tested positive for HCV RNA. Eleven of 58 showed only antibody (anti-HCV), and 3/58 who had apparently received infectious blood showed no sign of past infection. The 11 who showed anti-HCV only, together with the three who showed no sign of past infection despite strong evidence of receiving HCV RNA-positive blood, had a mean age at transfusion of 27 years, compared with 46 years in the 44 recipients with persistent HCV infection. Virus genotyping in 33/44 HCV RNA-positive recipients revealed five different genotypes, which did not seem to influence the outcome. Virus genotypes in 31 donor-recipient pairs showed complete concordance. Liver biopsies in 23/44 RNA-positive recipients showed minimal inflammation in four, mild in eight, and moderate in 11. Liver fibrosis, Ishak grades 1-3, was present in 16/23 recipients. One other male recipient, not subjected to a liver biopsy, developed a hepatocellular carcinoma which caused his death at the age of 71, 8 years after transfusion.