Community networks (CNs) have gained momentum in the last few years with the increasing number of spontaneously deployed WiFi hotspots and home networks. These networks, owned and managed by volunteers, offer various services to their members and to the public. While Internet access is the most popular service, the provision of services of local interest within the network is enabled by the emerging technology of CN micro-clouds. By putting services closer to users, micro-clouds pursue not only better service performance, but also a lower entry barrier for the deployment of mainstream Internet services within the CN.
We present the results of a systematic literature review that examines the main paradigms and properties of programming languages developed for and used in High Performance Computing (HPC) for Big Data processing. The review is based on a combination of automated keyword-based searches in the Elsevier Science Direct database and further digital databases for articles published in international peer-reviewed journals and conferences, leading to an initial sample of 420 articles, which was then narrowed down in a second phase to 152 relevant articles published between 2006 and 2018. The manual analysis of these articles allowed us to identify 26 languages used in 33 of them for HPC for Big Data processing. We analyzed the languages and their usage against 22 criteria and summarize the results in this article. We evaluate the outcomes of the literature review by comparing them with the opinions of domain experts. Our results indicate, for instance, that the majority of HPC languages used in the context of Big Data are text-based general-purpose programming languages that target the end-user community.
Cloud SLAs compensate customers with credits when average availability drops below certain levels. This is too inflexible: consumers lose non-measurable amounts of performance and are only compensated later, in subsequent charging cycles. We propose to schedule virtual machines (VMs) driven by range-based non-linear reductions of utility, differentiated by user class and across different ranges of resource allocations: partial utility. This customer-defined metric allows providers to transfer resources between VMs in meaningful and economically efficient ways. We define a comprehensive cost model incorporating the partial utility that clients assign to a given level of degradation when VMs are allocated in overcommitted environments (public, private, and community clouds). We extended CloudSim to support our scheduling model. Several simulation scenarios with synthetic and real workloads are presented, using datacenters of different dimensions in terms of the number of servers and computational capacity. We show that partial utility-driven scheduling allows more VMs to be allocated. It benefits providers in terms of revenue and resource utilization, yielding more revenue per resource allocated and scaling well with the size of datacenters when compared with a utility-oblivious redistribution of resources. Clients also see their workloads' execution time improved through an SLA-based redistribution of their VMs' computational power.
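To make the partial-utility idea concrete, the following is a minimal Python sketch of how a provider could account for revenue under degradation. The user classes, range thresholds, and utility values here are illustrative assumptions, not the paper's actual cost model.

```python
# Minimal sketch of partial-utility accounting (the class names and the
# concrete utility curves are illustrative assumptions, not the paper's model).

# Range-based utility tables per user class: (min_alloc_fraction, utility).
# Utility is the fraction of the agreed price the client still pays when
# its VM receives only that share of the requested capacity.
UTILITY_RANGES = {
    "gold":   [(1.0, 1.0), (0.8, 0.5), (0.0, 0.0)],    # degrades steeply
    "silver": [(1.0, 1.0), (0.6, 0.7), (0.3, 0.4), (0.0, 0.0)],
    "bronze": [(1.0, 1.0), (0.4, 0.6), (0.0, 0.2)],    # tolerates degradation
}

def partial_utility(user_class: str, alloc_fraction: float) -> float:
    """Return the utility of the first range whose threshold is met."""
    for threshold, utility in UTILITY_RANGES[user_class]:
        if alloc_fraction >= threshold:
            return utility
    return 0.0

def provider_revenue(vms: list) -> float:
    """Revenue under overcommitment: each VM's price scaled by its utility."""
    return sum(vm["price"] * partial_utility(vm["class"], vm["alloc"])
               for vm in vms)

vms = [
    {"class": "gold",   "price": 10.0, "alloc": 1.0},   # fully served
    {"class": "silver", "price": 5.0,  "alloc": 0.65},  # mildly degraded
    {"class": "bronze", "price": 2.0,  "alloc": 0.45},  # heavily degraded
]
print(provider_revenue(vms))  # 10.0 + 3.5 + 1.2 = 14.7
```

Under such a model, a scheduler can deliberately squeeze bronze-class VMs in an overcommitted host, since the revenue lost per unit of reclaimed capacity is smallest there.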
Citizens develop Wireless Mesh Networks (WMNs) in many areas as an alternative, or their only means, of local interconnection and access to the Internet. This access is often achieved through several shared web proxy gateways. These network infrastructures consist of heterogeneous technologies and combine diverse routing protocols. Network-aware state-of-the-art proxy selection schemes for WMNs do not work in this heterogeneous environment. We developed a client-side gateway selection mechanism that optimizes the client-gateway choice, is agnostic to the underlying infrastructure and protocols, and requires no modification of proxies or the underlying network. The choice is sensitive to network congestion and proxy load, without requiring a minimum number of participating nodes. Extended Vivaldi network coordinates are used to estimate client-proxy network performance. The load of each proxy is estimated passively by collecting the Time-to-First-Byte of HTTP requests, and is shared across clients. Our proposal was evaluated experimentally with clients and proxies deployed in guifi.net, the largest community wireless network in the world. Our selection mechanism avoids proxies with heavy load and slow internal network paths, with overhead linear in the number of clients and proxies.
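As an illustration of the selection mechanism described above, here is a minimal Python sketch that combines a Vivaldi-style network distance estimate with a shared Time-to-First-Byte load estimate. The field names, coordinate model, and scoring weights are assumptions for illustration, not the mechanism's actual implementation.

```python
import math

# Hedged sketch: score each proxy by estimated network distance (Vivaldi
# coordinates with a "height" term, as in extended Vivaldi variants) plus
# its passively measured, shared average Time-to-First-Byte.

def vivaldi_distance(c1, c2):
    """Estimated RTT between two Vivaldi coordinates (Euclidean + heights)."""
    return math.dist(c1["pos"], c2["pos"]) + c1["height"] + c2["height"]

def select_proxy(client_coord, proxies, w_rtt=1.0, w_load=1.0):
    """Pick the proxy with the best combined distance/load score."""
    def score(p):
        rtt = vivaldi_distance(client_coord, p["coord"])
        return w_rtt * rtt + w_load * p["avg_ttfb_ms"]
    return min(proxies, key=score)

client = {"pos": (2.0, 1.0), "height": 3.0}
proxies = [
    {"name": "proxy-a", "coord": {"pos": (2.5, 1.2), "height": 2.0},
     "avg_ttfb_ms": 120.0},   # nearby but heavily loaded
    {"name": "proxy-b", "coord": {"pos": (9.0, 4.0), "height": 2.0},
     "avg_ttfb_ms": 15.0},    # farther away but idle
]
print(select_proxy(client, proxies)["name"])  # proxy-b: load outweighs distance
```

Because both inputs are estimated passively (coordinates from existing traffic, TTFB from ordinary HTTP requests), a client can score every proxy without active probing, which keeps the overhead linear in the number of clients and proxies.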
Community networks (CNs) have gained momentum in the last few years with the increasing number of spontaneously deployed WiFi hotspots and home networks. These networks, owned and managed by volunteers, offer various services to their members and to the public. To reduce the complexity of service deployment, community micro-clouds have recently emerged as a promising enabler for the delivery of cloud services to community users. By putting services closer to consumers, micro-clouds pursue not only better service performance, but also a low entry barrier for the deployment of mainstream Internet services within the CN. Unfortunately, provisioning these services is not simple: the large and irregular topology and the high software and hardware diversity of CNs require careful placement of micro-clouds and services over the network. To achieve this, this paper proposes to leverage state information about the network to inform service placement decisions through a fast heuristic algorithm, which is vital to react quickly to changing conditions. To evaluate its performance, we compare our heuristic with one based on random placement in Guifi.net, the biggest CN worldwide. Our experimental results show that our heuristic consistently outperforms random placement by 211% in terms of bandwidth gain. We quantify the benefits of our heuristic on a real live video-streaming service and demonstrate that video chunk losses decrease significantly, attaining a 37% decrease in the packet loss rate. Further, using a popular Web 2.0 service, we demonstrate that client response times decrease by up to an order of magnitude when using our heuristic.
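To illustrate the contrast between state-informed and random placement, the following is a small Python sketch of a greedy, bandwidth-aware placement rule. The topology model and the bandwidth figures are invented for illustration; the paper's heuristic and the Guifi.net measurements are more involved.

```python
import random

# Hedged sketch of the core idea: use current network state (here, just
# per-link available bandwidth from candidate nodes to client zones) to
# place a service, versus a state-oblivious random choice.

# Available bandwidth (Mbps) from each candidate node to each client zone.
bw = {
    "node-a": {"zone-1": 80, "zone-2": 10},
    "node-b": {"zone-1": 40, "zone-2": 35},
    "node-c": {"zone-1": 5,  "zone-2": 90},
}

def heuristic_place(bw, zones):
    """Greedily pick the node maximizing the bottleneck bandwidth to clients."""
    return max(bw, key=lambda node: min(bw[node][z] for z in zones))

def random_place(bw):
    """State-oblivious baseline: any candidate node is equally likely."""
    return random.choice(list(bw))

zones = ["zone-1", "zone-2"]
print("heuristic:", heuristic_place(bw, zones))  # node-b (bottleneck: 35 Mbps)
print("random:   ", random_place(bw))            # may pick a congested node
```

A greedy rule like this runs in time linear in the number of candidate nodes and links, which is what makes fast re-evaluation feasible when network conditions change.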
Today we are increasingly dependent on critical data stored in cloud data centers across the world. To deliver high availability and augmented performance, different replication schemes are used to maintain consistency among replicas. With classical consistency models, performance is necessarily degraded, and thus most highly scalable cloud data centers sacrifice consistency to some extent in exchange for lower latencies to end users. Moreover, those cloud systems blindly allow stale data to exist for some constant period of time, disregarding the semantics and importance the data might have, which could be used to steer consistency more wisely by combining stronger and weaker consistency levels. To tackle this inherent and well-studied trade-off between availability and consistency, we propose VFC3, a novel consistency model for data replicated across data centers, with framework and library support to enforce increasing degrees of consistency for different types of data (based on their semantics). It targets cloud tabular data stores, offering rationalization of resources (especially bandwidth) and improved QoS (performance, latency, and availability) by providing strong consistency where it matters most and relaxing it on less critical classes or items of data.
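As a rough illustration of enforcing different degrees of consistency per data class, here is a Python sketch in the spirit of the described model, bounding staleness along three dimensions (elapsed time, pending updates, and value divergence). The class names, bound dimensions, and thresholds are assumptions for illustration, not VFC3's actual API.

```python
import time

# Hedged sketch: each data class gets a maximum-staleness bound; a replica
# must synchronize as soon as any component of its class's bound is exceeded.
BOUNDS = {
    "critical":   {"max_age_s": 0.0,  "max_pending": 0,   "max_divergence": 0.0},
    "important":  {"max_age_s": 5.0,  "max_pending": 10,  "max_divergence": 0.05},
    "background": {"max_age_s": 60.0, "max_pending": 100, "max_divergence": 0.2},
}

def must_synchronize(cls, last_sync_ts, pending_updates, divergence):
    """True once any staleness bound of the data class is violated."""
    b = BOUNDS[cls]
    return (time.time() - last_sync_ts > b["max_age_s"]
            or pending_updates > b["max_pending"]
            or abs(divergence) > b["max_divergence"])

# A "critical" item syncs on every update; a "background" counter can lag,
# saving bandwidth by batching its propagation.
print(must_synchronize("critical",   time.time(), 1, 0.0))   # True
print(must_synchronize("background", time.time(), 3, 0.01))  # False
```

Grading consistency this way concentrates synchronization traffic on the data where staleness actually hurts, which is the bandwidth rationalization the abstract refers to.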