SensorNet testbeds are critical for understanding and meeting the technical challenges of wireless SensorNets. As the size of and demand for these testbeds grow, resource management will become increasingly important to the effectiveness of these environments. In this paper, we argue that a microeconomic resource allocation scheme, specifically the combinatorial auction, is well suited to testbed resource management. To demonstrate this, we present the Mirage resource allocation system. In Mirage, testbed resources are allocated using a repeated combinatorial auction within a closed virtual currency environment. Users compete for testbed resources by submitting bids that specify resource combinations of interest in space/time (e.g., "any 32 MICA2 motes for 8 hours anytime in the next three days") along with the maximum amount the user is willing to pay. A combinatorial auction is then periodically run to determine the winning bids based on supply and demand while maximizing the aggregate utility delivered to users. We have implemented a fully functional and secure prototype of Mirage and have been operating it in daily use for approximately four months on Intel Research Berkeley's 148-mote SensorNet testbed.
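The abstract does not specify how Mirage solves the winner-determination problem, which is NP-hard in general. As an illustration only, the sketch below uses a common greedy heuristic: for a single allocation period, bids requesting sets of motes are accepted in order of value per mote, skipping any bid that conflicts with motes already allocated. The bid format and data are hypothetical.

```python
# Illustrative sketch (not Mirage's actual algorithm): greedy winner
# determination for a single-period combinatorial auction. Each bid names
# a set of motes and a maximum payment; we approximate the NP-hard
# value-maximizing allocation by accepting bids in order of value per mote.

def greedy_winners(bids, available):
    """bids: list of (bidder, requested_motes: set, value); available: set of motes."""
    winners = []
    free = set(available)
    # Consider highest value-per-mote first -- a standard approximation heuristic.
    for bidder, motes, value in sorted(bids, key=lambda b: b[2] / len(b[1]), reverse=True):
        if motes <= free:          # all requested motes are still unallocated
            winners.append((bidder, value))
            free -= motes          # remove the awarded motes from the pool
    return winners

motes = set(range(8))
bids = [
    ("alice", {0, 1, 2, 3}, 40),   # 10 per mote
    ("bob",   {2, 3, 4, 5}, 60),   # 15 per mote
    ("carol", {6, 7},       10),   # 5 per mote
]
print(greedy_winners(bids, motes))  # → [('bob', 60), ('carol', 10)]; alice conflicts with bob
```

Real winner determination would also handle the time dimension of bids ("anytime in the next three days") and would typically use an exact integer-programming solver rather than a greedy pass; this sketch only conveys the flavor of the allocation decision.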
Federated, geographically distributed computing platforms such as PlanetLab [1] and the Grid [2,3] have recently become popular for evaluating and deploying network services and scientific computations. As the size, reach, and user population of such infrastructures grow, resource discovery and resource selection become increasingly important. Although a number of resource discovery and allocation services have been built, there is little data on the utilization of the distributed computing platforms they target. Yet the design and efficacy of such services depend on the characteristics of the target platform.

To inform the design and implementation of emerging resource discovery and allocation systems, we examine the usage characteristics of PlanetLab, a federated, best-effort, time-shared platform for "developing, deploying, and accessing" wide-area distributed applications [1]. In particular, we investigate the variability of available host resources across nodes and over time, how that variability interacts with the resource demands of several popular long-running services, and how careful application placement and migration may be used to reduce the impact of this variability. We also investigate the feasibility of using stale or predicted measurements to reduce overhead in a system that automates service placement and migration.

Our study analyzes a six-month trace of node-, network-, and application-level measurements collected to address the following questions: (i) Is informed service placement, that is, using live platform utilization data to choose where to deploy an application, beneficial? (ii) Is migration, that is, moving deployed application instances to different nodes in response to changes in resource availability, useful? (iii) Can we reduce the overhead of a service placement service by using stale or predicted data to make placement and migration decisions?
(iv) What forms of correlation and prediction are not applicable?

We find:

• Usage of both CPU and network resources is heavy and highly variable, suggesting that shared infrastructures such as PlanetLab would benefit from a resource allocation infrastructure. Moreover, available resources across nodes and resource demands across instances of an application both vary widely. This suggests that even in the absence of a resource allocation system, some applications would benefit from intelligently mapping application instances to available nodes.

• Node placement decisions can become ill-suited after about 30 minutes, suggesting that a resource discovery system should not only be able to deploy applications intelligently, but should also be capable of migrating performance-sensitive applications whose migration cost is acceptable.

• Stale data, and certain types of predicted data, can be used effectively to reduce measurement overhead. For example, using resource availability and utilization data up to a half hour old to make migration decisions still enables an application's resource needs to be met more frequently than not migrating at all; this suggests that a migrati...
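The "informed placement" decision the study evaluates can be pictured concretely. The sketch below is a hypothetical illustration, not the paper's methodology: it ranks candidate nodes by their most recent available-CPU measurement, discarding readings older than the half-hour staleness bound the findings mention. The node names, data shapes, and threshold are assumptions for the example.

```python
# Hypothetical sketch of "informed placement": pick the k nodes with the most
# available CPU, using each node's most recent (possibly stale) measurement.
# Data shapes and the 30-minute staleness bound are illustrative.

def place(measurements, k, max_age_s=1800):
    """measurements: {node: (timestamp_s, free_cpu_fraction)}; returns k node names.
    Measurements older than max_age_s (here 30 minutes) are ignored."""
    now = max(ts for ts, _ in measurements.values())
    # Keep only measurements fresh enough to trust.
    fresh = {n: cpu for n, (ts, cpu) in measurements.items() if now - ts <= max_age_s}
    # Rank the remaining nodes by available CPU, highest first.
    return sorted(fresh, key=fresh.get, reverse=True)[:k]

m = {
    "planetlab1.example": (1500, 0.80),
    "planetlab2.example": (1900, 0.35),
    "planetlab3.example": (3000, 0.60),  # newest reading
    "planetlab4.example": (100,  0.95),  # 2900 s old: too stale to use
}
print(place(m, 2))  # → ['planetlab1.example', 'planetlab3.example']
```

A migration policy in the same spirit would periodically re-run this ranking and move an application instance when its current node falls far enough below the best candidates to justify the migration cost.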
PlanetLab is a global overlay network for developing and accessing broad-coverage network services. Our goal is to grow to 1,000 geographically distributed nodes, connected by a diverse collection of links. PlanetLab allows multiple services to run concurrently and continuously, each in its own slice of PlanetLab. This paper describes our initial implementation of PlanetLab, including the mechanisms used to implement virtualization, and the collection of core services used to manage PlanetLab.