Abstract: Much recent research has investigated algorithms for allocating virtual machines (VMs) to physical machines (PMs) in infrastructure clouds. Many such algorithms address distinct problems, such as initial placement, consolidation, or trade-offs between honoring service-level agreements and constraining provider operating costs. Even where similar problems are addressed, each research team evaluates its proposed algorithms under distinct conditions, using various techniques, often targeting a small collection of VMs and PMs. In this paper, we describe an objective method that can be used to compare VM-placement algorithms in large clouds, covering tens of thousands of PMs and hundreds of thousands of VMs. We demonstrate our method by comparing 18 algorithms for initial VM placement in on-demand infrastructure clouds, including algorithms inspired by open-source code for infrastructure clouds and by the online bin-packing literature.
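To make the class of algorithms concrete, a first-fit heuristic from the online bin-packing literature can be sketched as below: each arriving VM is placed on the first PM with enough spare capacity. This is a minimal illustration, not the paper's implementation; the `PM` class, the single-dimensional capacity model, and the function names are assumptions made for the example.

```python
from typing import List, Optional

class PM:
    """A physical machine with a fixed capacity and a running total of allocated load."""
    def __init__(self, capacity: float):
        self.capacity = capacity
        self.used = 0.0

    def fits(self, demand: float) -> bool:
        return self.used + demand <= self.capacity

    def place(self, demand: float) -> None:
        self.used += demand

def first_fit(pms: List[PM], demand: float) -> Optional[int]:
    """Place a VM on the first PM with enough spare capacity; return its index, or None if no PM fits."""
    for i, pm in enumerate(pms):
        if pm.fits(demand):
            pm.place(demand)
            return i
    return None

# Example: three PMs with 4 units of capacity each; VM demands arrive online.
pms = [PM(4.0) for _ in range(3)]
placements = [first_fit(pms, d) for d in [2.0, 3.0, 2.0, 1.0]]
print(placements)  # [0, 1, 0, 1]
```

Real placement algorithms typically track multiple resource dimensions (CPU, memory, network) and break ties differently (best-fit, worst-fit), which is exactly the design space a common evaluation method lets one compare at scale.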
Many natural and man-made systems exhibit self-organization, where interactions among components lead to system-wide patterns of behavior. This paper first introduces current scientific understanding of self-organizing systems and then identifies the main models investigated by computer scientists seeking to apply self-organization to the design of large, distributed systems. Subsequently, the paper surveys research that uses models of self-organization in wireless sensor networks to provide a variety of functions: sharing processing and communication capacity; forming and maintaining structures; conserving power; synchronizing time; configuring software components; adapting behavior associated with routing, with disseminating and querying for information, and with allocating tasks; and providing resilience by repairing faults and resisting attacks. The paper closes with a summary of open issues that must be addressed before self-organization can be applied routinely during the design and deployment of sensor networks and other distributed computer systems.
Networking engineers increasingly depend on simulation to design and deploy complex, heterogeneous networks. Similarly, networking researchers increasingly depend on simulation to investigate the behavior and performance of new protocol designs. Despite such widespread use of simulation, today there exists little common understanding of the degree of validation required for various applications of simulation. Further, only limited knowledge exists regarding the effectiveness of known validation techniques. To investigate these issues, in May 1999 DARPA and NIST organized a workshop on Network Simulation Validation. This article reports on discussions and consensus about issues that arose at the workshop. We describe best current practices for validating simulations and for validating TCP models across various simulation environments. We also discuss interactions between scale and model validation, and future challenges for the community.

Networks continue to grow more complex as industry deploys a mix of wired and wireless technologies into large-scale heterogeneous network architectures and as user applications and traffic continue to evolve. For example, increased complexity already affects Department of Defense combat networks, the Internet, and industrial wireless networks. Faced with such growing complexity, network designers and researchers almost universally use simulation in order to predict the expected performance of complex networks and to understand the behavior of existing network protocols not originally designed to operate in today's networks. Simulation is also increasingly used to predict the correctness and performance of new protocol designs.
In addition, the use of simulation now appears as a strict requirement in processes leading to international standards, such as the IMT-2000 standard for third-generation wireless cellular telephony. This growing reliance on simulation raises the stakes with regard to establishing the correctness and predictive merits of specific simulation models. Yet no widely accepted practices and techniques exist to help validate network simulations and to evaluate the trustworthiness of their results. Early work in networking research and engineering involved both experimentation and mathematical modeling to prove feasibility and to establish bounds on expected performance. In the past ten years, as networks have grown too large to allow easy experimentation and too complicated to admit tractable mathematical analysis, network simulation has filled an increasingly important role, helping researchers and designers to understand the behavior and performance of protocols and networks. (Of course, modern simulation models often also include analytical submodels; such hybrid models can be more effective than either simulation or analysis alone.) Today simulation is often used:

- To predict the performance of current networks and protocols in order to aid technology assessment and capacity planning, and to demonstrate fulfillment of customer goals.
- To predict the expected be...
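The idea of validating a simulation against an analytical submodel can be illustrated with a minimal example: a discrete-event simulation of an M/M/1 queue whose mean time in system can be checked against the closed-form result 1/(mu - lambda). The function name, parameters, and queue model below are illustrative assumptions, not material from the article.

```python
import random

def simulate_mm1(arrival_rate: float, service_rate: float,
                 n_customers: int, seed: int = 1) -> float:
    """Discrete-event simulation of an M/M/1 FIFO queue; returns mean time in system."""
    rng = random.Random(seed)
    arrival = 0.0      # arrival time of the current customer
    depart = 0.0       # departure time of the previous customer
    total_time = 0.0
    for _ in range(n_customers):
        arrival += rng.expovariate(arrival_rate)
        start = max(arrival, depart)              # service starts when the server is free
        depart = start + rng.expovariate(service_rate)
        total_time += depart - arrival            # time this customer spent in the system
    return total_time / n_customers

lam, mu = 0.5, 1.0
simulated = simulate_mm1(lam, mu, 200_000)
analytical = 1.0 / (mu - lam)   # closed-form mean time in system for M/M/1
print(f"simulated={simulated:.3f}, analytical={analytical:.3f}")
```

Agreement between the two estimates validates the simulator on a case where theory is tractable, which is the kind of cross-check the workshop's best-practice discussion concerns; real networks then require the simulator in regimes where no closed form exists.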