DFuse is an architectural framework for dynamic, application-specified data fusion in sensor networks. It bridges an important abstraction gap for developing advanced fusion applications that take into account the dynamic nature of applications and sensor networks. Elements of the DFuse architecture include a fusion API, a distributed role assignment algorithm that dynamically adapts the placement of the application task graph on the network, and an abstraction migration facility that aids such dynamic role assignment. Experimental evaluations show that the API has low overhead, and simulation results show that the role assignment algorithm significantly increases network lifetime over static placement.
This paper surveys a variety of subsystems designed to be the building blocks from which sophisticated infrastructures for ubiquitous computing are assembled. Our experience shows that many of these building blocks fit neatly into one of five categories, each containing functionally equivalent components. Effectively identifying the best-fit "Lego pieces", which in turn determines the composite functionality of the resulting infrastructure, is critical. The selection process, however, is impeded by the lack of a convention for labeling these classes of building blocks. The lack of clarity about which ready-made subsystems are available within each class often results in naive re-implementation of ready-made components, monolithic and clumsy implementations, and implementations that impose non-standard interfaces on the applications above. This paper explores each class of subsystems in light of the experience gained over two years of active development of both ubiquitous computing applications and the software infrastructures for their deployment.
Scheduling a streaming application on high-performance computing (HPC) resources has to be sensitive to the computation and communication needs of each stage of the application dataflow graph to ensure QoS criteria such as latency and throughput. Since the grid has evolved out of traditional high-performance computing, the tools available for scheduling are more appropriate for batch-oriented applications. Our scheduler, called Streamline, considers the dynamic nature of the grid and runs periodically to adapt scheduling decisions using application requirements (per-stage computation and communication needs), application constraints (such as co-location of stages), and resource availability. The performance of Streamline is compared with an Optimal placement, Simulated Annealing (SA) approximations, and E-Condor, a streaming grid scheduler built using Condor. For kernels of streaming applications, we show that Streamline performs close to the Optimal and SA algorithms, and an order of magnitude better than E-Condor under non-uniform load conditions. We also conduct scalability studies showing the advantage of Streamline over other approaches. Furthermore, we implement Streamline on PlanetLab as a grid service and demonstrate that it performs close to the SA algorithm under dynamic resource conditions. (An earlier version of this paper appeared in [1]. This version adds Sect. 6 on experiments in a wide-area environment, describing our experience implementing the Streamline scheduler as a grid service on PlanetLab and presenting the experimental results obtained there, and updates the related work in Sect. 7.)
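To make the placement problem concrete, the following is a minimal, hypothetical sketch (not Streamline's actual algorithm) of scoring candidate mappings of dataflow stages onto resources. It combines per-stage computation time with inter-stage communication time, then exhaustively searches a tiny instance; all names, data structures, and the cost model are illustrative assumptions.

```python
import itertools

def placement_cost(assignment, comp_need, comm_need, cpu_speed, bandwidth):
    """Estimated cost of mapping stage i -> resource assignment[i].

    Illustrative cost model (an assumption, not Streamline's):
      comp_need[i]    -- work units required by stage i
      comm_need[i]    -- data sent from stage i to stage i+1
      cpu_speed[r]    -- work units per second on resource r
      bandwidth[a][b] -- data units per second between resources a and b
    Co-located stages (same resource) incur no communication cost.
    """
    cost = sum(comp_need[i] / cpu_speed[r] for i, r in enumerate(assignment))
    for i in range(len(assignment) - 1):
        a, b = assignment[i], assignment[i + 1]
        if a != b:
            cost += comm_need[i] / bandwidth[a][b]
    return cost

def best_placement(n_stages, resources, comp_need, comm_need, cpu_speed, bandwidth):
    """Brute-force search over all assignments; only feasible for tiny graphs."""
    return min(itertools.product(resources, repeat=n_stages),
               key=lambda asg: placement_cost(asg, comp_need, comm_need,
                                              cpu_speed, bandwidth))

# Example: a 3-stage pipeline on two resources, where resource 0 is twice
# as fast; the cheapest plan co-locates all stages on the fast resource.
comp = [4, 2, 4]
comm = [10, 10]
speed = {0: 2.0, 1: 1.0}
bw = {0: {1: 5.0}, 1: {0: 5.0}}
print(best_placement(3, [0, 1], comp, comm, speed, bw))  # -> (0, 0, 0)
```

A real scheduler would replace the brute-force search with a heuristic (e.g., simulated annealing, as compared against in the paper) and re-run it periodically as resource availability changes.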