Abstract: The overwhelming popularity of the Internet and advances in technology have driven the diffusion of many different Web-enabled devices. In such a heterogeneous client environment, efficient content adaptation and delivery services are becoming a major requirement for the new Internet service infrastructure. In this paper we describe intermediary-based architectures that provide adaptation and delivery of Web content to different user terminals. We present the design of a Squid-based prototype that carrie…
“…For the current scenario, the service times to dynamically generate a text-based resource follow an empirical distribution obtained by preliminary experiments, with a median of 220 ms. For multimedia resources, the service times are based on [3]. Since the time to adapt a multimedia resource is proportional to the resource size [6], [13], we consider a per-MB service time of 730 ms for images and 1054 ms for audio/video. To model future server infrastructures with more powerful CPUs, we assume that the server computational power will continue to increase according to Moore's law for the next five years.…”
Section: B. Performance Evaluation (mentioning, confidence 99%)
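The service-time model quoted above can be sketched numerically. The per-MB costs come from the quote itself; the two-year doubling period for Moore's law and all function names are assumptions made for illustration.

```python
# Sketch of the quoted service-time model. Per-MB costs (730 ms images,
# 1054 ms audio/video) are from the quote; the doubling period is assumed.
MOORE_DOUBLING_YEARS = 2.0  # assumed CPU-power doubling period

def adaptation_time_ms(kind: str, size_mb: float) -> float:
    """Current adaptation time: proportional to resource size."""
    per_mb = {"image": 730.0, "audio_video": 1054.0}
    return per_mb[kind] * size_mb

def projected_time_ms(current_ms: float, years: float) -> float:
    """Scale a service time by assumed Moore's-law CPU growth over `years`."""
    speedup = 2.0 ** (years / MOORE_DOUBLING_YEARS)
    return current_ms / speedup

t_now = adaptation_time_ms("image", 2.0)   # 1460.0 ms for a 2 MB image today
t_future = projected_time_ms(t_now, 5.0)   # roughly 258 ms after five years
```

Under these assumptions a five-year horizon yields a speedup of about 5.7x, which frames the paper's argument that hardware growth alone may not absorb the projected multimedia workload.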
“…These approaches may be integrated with caching strategies [22], [6], [13] that typically exploit a utility function to determine whether it is convenient to cache an adapted version of a given resource. However, all these proposals consider a traditional Web scenario, with a limited amount of multimedia resources and a small fraction of requests coming from mobile devices and, consequently, requiring adaptation.…”
Section: B. Performance Evaluation (mentioning, confidence 99%)
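The utility-based cache-admission idea mentioned above can be illustrated with a minimal sketch. The excerpt does not give the actual utility functions of [22], [6], [13], so the formula, threshold, and names below are illustrative assumptions only.

```python
def cache_utility(popularity: float, adapt_cost_ms: float, size_mb: float) -> float:
    """Illustrative utility: frequently requested, expensive-to-adapt
    resources score high; large resources score low (storage pressure)."""
    return popularity * adapt_cost_ms / size_mb

def should_cache_adapted(popularity: float, adapt_cost_ms: float,
                         size_mb: float, threshold: float = 100.0) -> bool:
    # Cache the adapted version only when its utility clears the threshold
    # (the threshold value here is an arbitrary illustration).
    return cache_utility(popularity, adapt_cost_ms, size_mb) >= threshold
```

The general shape, benefit of reuse weighed against storage cost, is what the cited strategies share; their concrete scoring functions differ.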
“…For example, photo and video sharing services (e.g., YouTube, Flickr) are causing an explosion of demand for multimedia content. These trends will determine a future mobile Web scenario characterized by a large amount of heterogeneous content, mainly consisting of multimedia resources (e.g., [1], [2], [3]), that will have to be tailored to user preferences and device capabilities on-the-fly at the moment of the client request [4], [5], [6]. Designing server architectures that will support future Mobile Web-based services requires an initial evaluation of the workload and its computational impact.…”
Abstract-The great diffusion of Mobile Web-enabled devices allows the implementation of novel personalization, location, and adaptation services that will place unprecedented strains on the server infrastructure of the content provider. This paper has a twofold contribution. First, we analyze the five-year trend of Mobile Web-based applications in terms of the workload characteristics of the most popular services and their impact on server infrastructures. As the technological improvements at the server level over the same period are insufficient to meet the computational requirements of future Mobile Web-based services, we propose and evaluate adequate resource management strategies. We demonstrate that pre-adapting a small fraction of the most popular resources can reduce the response time by up to one third, thus addressing the increased computational impact of future Mobile Web services.
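The pre-adaptation strategy from this abstract, adapting a small fraction of the most popular resources ahead of client requests, might be sketched as follows; the 5% default fraction and the function name are assumptions, not the paper's stated parameters.

```python
def select_for_preadaptation(request_counts: dict[str, int],
                             fraction: float = 0.05) -> list[str]:
    """Pick the top `fraction` of resources by request count; these are
    the candidates to adapt offline, before clients ask for them."""
    ranked = sorted(request_counts, key=request_counts.get, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return ranked[:k]

popular = select_for_preadaptation(
    {"a.jpg": 900, "b.mp4": 450, "c.jpg": 30, "d.png": 5}, fraction=0.25)
# With four resources and fraction=0.25, only "a.jpg" is selected.
```

Because Web request popularity is highly skewed, even a small selected fraction covers a large share of requests, which is why the abstract can claim a one-third response-time reduction from pre-adapting so little.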
“…For these reasons, supporting efficient content adaptation and delivery services is a complex task that requires intervention at the software level, modification of existing protocols, and even careful design of the server infrastructures that are the focus of this paper. Any content provider must address the high computational cost of adaptation and the increased storage requirements due to the presence of multiple resource versions; hence the most practicable solution is to rely on intermediary architectures consisting of multiple geographically distributed servers interposed on the path from the client to the origin server [5], [17], [23], [27]. This type of architecture opens the possibility of sharing the load of computationally expensive services, increases the storage capacity, and improves content delivery.…”
Section: Introduction (mentioning, confidence 99%)
“…The motivation is that content partitioning has the effect of preserving request access locality and avoiding content replication. The proposed architecture is innovative because it addresses present and future scalability and performance issues that are not solved by the available solutions based on geographically distributed architectures [5]. We compare the Two-level architecture with alternative flat architectures through several experiments based on prototype systems.…”
The growing demand for Web and multimedia content accessed through heterogeneous devices requires providers to tailor resources to device capabilities on-the-fly. Providing services for content adaptation and delivery poses two novel challenges for present and future content provider architectures: content adaptation services are computationally expensive, and the global storage requirements increase because multiple versions of the same resource may be generated for different client devices. We propose a novel two-level distributed architecture to support efficient content adaptation and delivery services. The nodes of the architecture are organized in two levels: thin edge nodes on the first level act as simple request gateways towards the nodes of the second level; fat interior clusters perform all the other tasks, such as content adaptation, caching, and fetching. Several experimental results show that the Two-level architecture achieves better performance and scalability than existing flat or non-cooperative architectures.
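The content-partitioning idea behind this two-level design (thin edge nodes routing each resource to exactly one fat interior cluster, so adapted versions are not replicated across clusters) can be sketched with a deterministic hash mapping. The hashing scheme and names below are assumptions, not the paper's actual routing algorithm.

```python
import hashlib

def interior_cluster_for(url: str, clusters: list[str]) -> str:
    """Map a resource URL to exactly one interior cluster.

    Because the mapping is deterministic, every edge node forwards
    requests for the same resource to the same cluster, preserving
    request access locality and avoiding replication of adapted versions."""
    digest = hashlib.sha1(url.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(clusters)
    return clusters[index]
```

A consistent-hashing variant would be a natural refinement, so that adding or removing an interior cluster remaps only a small share of resources rather than nearly all of them.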
Computer networks have two important characteristics: the vast diversity of connected devices and the great variability in the physical distribution of equipment. Therefore, a performance analysis of a specific network based on absolute references or third-party benchmarks may not be applicable in all circumstances, especially in highly complex and heterogeneous networks. Indeed, such an analysis carries a high degree of uncertainty, and classical logic may not be appropriate to deal with it. This paper aims to parameterize and evaluate the operating elements of heterogeneous networks through the analysis of representative attributes, based on the concepts of Paraconsistent Annotated Evidential Logic Eτ.