Hot spots in a wireless sensor network emerge as locations under heavy traffic load. Nodes in such areas quickly deplete their energy resources, leading to disruption of network services. This problem is common in data collection scenarios in which Cluster Heads (CHs) carry a heavy burden of gathering and relaying information. The relay load on CHs intensifies as the distance to the sink decreases. To balance the traffic load and the energy consumption in the network, the CH role should be rotated among all nodes and the cluster sizes should be carefully determined in different parts of the network. This paper proposes a distributed clustering algorithm, Energy-efficient Clustering (EC), that determines suitable cluster sizes depending on the hop distance to the data sink, while achieving approximate equalization of node lifetimes and reduced energy consumption levels. We additionally propose a simple energy-efficient multihop data collection protocol to evaluate the effectiveness of EC and calculate the end-to-end energy consumption of this protocol; yet EC is suitable for any data collection protocol that focuses on energy conservation. Performance results demonstrate that EC extends network lifetime and achieves energy equalization more effectively than two well-known clustering algorithms, HEED and UCR.
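The core idea of hop-distance-dependent cluster sizing can be illustrated with a minimal sketch: CHs near the sink relay more traffic, so their clusters are kept smaller to offset the extra load. The function name, the linear interpolation, and all parameter values below are illustrative assumptions, not the actual EC algorithm from the paper.

```python
def cluster_radius(hop_distance, r_min=20.0, r_max=60.0, max_hops=8, alpha=1.0):
    """Illustrative cluster-size rule (NOT the paper's exact formula):
    clusters close to the sink (small hop_distance) get a smaller radius,
    since their CHs must spend more energy relaying traffic for others.
    alpha controls how quickly the radius grows with hop distance."""
    frac = min(hop_distance, max_hops) / max_hops  # normalized distance in [0, 1]
    return r_min + (r_max - r_min) * frac ** alpha

# Example: nodes one hop from the sink form tight clusters,
# while nodes at the network edge form large ones.
sizes = {h: cluster_radius(h) for h in (0, 4, 8)}
```

Any monotonically increasing mapping from hop distance to cluster size captures the same load-balancing intuition; the paper derives the actual sizes from energy-equalization constraints.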
Nowadays, large-scale video distribution accounts for a significant fraction of global Internet traffic. However, existing content delivery networks may not be cost-efficient enough to distribute adaptive video streaming, mainly due to the lack of orchestration of storage, computing, and bandwidth resources. In this paper, we leverage the media cloud to deliver on-demand adaptive video streaming services, where those resources can be dynamically scheduled in an on-demand fashion. Our objective is to minimize the total operational cost by optimally orchestrating multiple resources. Specifically, we formulate an optimization problem by examining a three-way trade-off between the caching, transcoding, and bandwidth costs at each edge server. Then, we adopt a two-step approach to analytically derive the closed-form solution of the optimal transcoding configuration and caching space allocation, respectively, for every edge server. Finally, we verify our solution through extensive simulations. The results indicate that our approach achieves significant cost savings compared with existing methods used in content delivery networks. We also find that the optimal strategy and its benefits can be affected by a list of system parameters, including the unit costs of different resources, the hop distance to the origin server, the Zipf parameter of users' request patterns, and the settings of the different bitrate versions of a segment.
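The three-way trade-off can be sketched as a per-segment decision at an edge server: cache a bitrate version (storage cost), transcode it on demand from another cached version (computing cost), or fetch it from the origin (bandwidth cost scaling with hop distance). The function, cost model, and parameters below are simplified assumptions for illustration, not the paper's formulation or its closed-form solution.

```python
def per_segment_strategy(req_rate, c_cache, c_transcode, c_bw, hops):
    """Illustrative per-segment cost comparison (an assumption, not the
    paper's model): pick the cheapest of caching the version locally,
    transcoding it per request, or fetching it from the origin server.

    req_rate    -- expected requests per billing period for this version
    c_cache     -- storage cost of holding the version for the period
    c_transcode -- computing cost of one on-demand transcode
    c_bw        -- bandwidth cost per request per hop
    hops        -- hop distance from this edge server to the origin
    """
    costs = {
        "cache":     c_cache,                  # paid once, serves all requests
        "transcode": req_rate * c_transcode,   # paid per request
        "fetch":     req_rate * c_bw * hops,   # paid per request, grows with hops
    }
    return min(costs, key=costs.get), costs

# A popular version far from the origin tends to be worth caching.
choice, costs = per_segment_strategy(req_rate=10, c_cache=5.0,
                                     c_transcode=1.0, c_bw=0.8, hops=3)
```

Even this toy version shows how the Zipf-shaped request rate and the hop distance to the origin shift the optimal choice, which is the qualitative behavior the paper analyzes.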
User-generated content (UGC) is emerging as one of the dominant forms in the global media industry. However, the efficient delivery of UGC faces massive technical challenges due to its long-tail nature. Content delivery network (CDN) based systems are considered potential solutions for delivering UGC, but none of the existing CDN-based solutions supports all the features required for UGC delivery. This paper proposes content-delivery-as-a-service (CoDaaS), an innovative idea to enable on-demand virtual content delivery service (vCDS) overlays for UGC providers to deliver their content to a group of designated consumers. The proposed CoDaaS solution is built on a hybrid media cloud and offers an elastic private virtual content delivery service with an agreed Quality of Service (QoS) to UGC providers. In this paper, we also implement a simulation of CoDaaS. The preliminary results validate all the required features for UGC delivery and verify its comparative performance advantages. We are working on optimizing the system performance with different algorithms (e.g., collaborative caching and context-aware streaming), and ultimately characterizing the fundamental trade-off between cost and quality-of-service in UGC delivery.
The swift adoption of cloud services is accelerating the deployment of data centers. These data centers consume a large amount of energy, which is expected to grow dramatically under existing technological trends. Therefore, research efforts are greatly needed to architect green data centers with better energy efficiency. The most prominent approach is consolidation enabled by virtualization. However, little attention has been paid to the potential overhead in energy usage and the throughput reduction of virtualized servers. A clear understanding of energy usage on virtualized servers lays a solid foundation for green data center architecture. This paper investigates how virtualization affects the energy usage of servers under different task loads, aiming to understand a fundamental trade-off between the energy savings from consolidation and the detrimental effects of virtualization. We adopt an empirical approach to measure server energy usage under different configurations, including a benchmark case and two alternative hypervisors. Based on the collected data, we report several findings on the impact of virtualization on server energy usage and their implications for green data center architecture. We envision that these technical insights will bring significant value to green data center architecture and operations.