Abstract: The Internet is undergoing substantial changes from a communication and browsing infrastructure to a medium for conducting business and marketing a myriad of services. The World Wide Web provides a uniform and widely-accepted application interface used by these services to reach multitudes of clients. These changes place the Web server at the center of a gradually emerging e-service infrastructure with increasing requirements for service quality and reliability guarantees in an unpredictable and highly-dynamic environment. This paper describes performance control of a Web server using classical feedback control theory. We use feedback control theory to achieve overload protection, performance guarantees, and service differentiation in the presence of load unpredictability. We show that feedback control theory offers a promising analytic foundation for providing service differentiation and performance guarantees. We demonstrate how a general Web server may be modeled for purposes of performance control, present the equivalents of sensors and actuators, formulate a simple feedback loop, describe how it can leverage real-time scheduling and feedback-control theories to achieve per-class response-time and throughput guarantees, and evaluate the efficacy of the scheme on an experimental testbed using the most popular Web server, Apache. Experimental results indicate that control-theoretic techniques offer a sound way of achieving desired performance in performance-critical Internet applications. Our QoS (Quality-of-Service) management solutions can be implemented either in middleware that is transparent to the server, or as a library called by server code.
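The sensor/actuator feedback loop the abstract describes can be illustrated with a minimal sketch: a proportional-integral (PI) controller samples measured response time (the sensor) and adjusts the fraction of admitted requests (the actuator) to track a set point. The linear server model, gains, and set point below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of a PI feedback loop for Web server performance control.
# The plant model, gains, and set point are made-up illustrative values.

TARGET_RT = 1.0          # desired response time (seconds) -- assumed set point
KP, KI = 0.2, 0.05       # proportional and integral gains -- assumed tuning

def measured_response_time(admit_fraction):
    """Toy plant: response time grows linearly with admitted load."""
    return 0.1 + 2.0 * admit_fraction

def control_loop(steps=50):
    admit, integral = 1.0, 0.0       # start fully open, no accumulated error
    for _ in range(steps):
        error = measured_response_time(admit) - TARGET_RT
        integral += error
        # Admitting fewer requests lowers response time, so subtract.
        admit = min(1.0, max(0.0, admit - KP * error - KI * integral))
    return admit, measured_response_time(admit)

admit, rt = control_loop()
```

With these gains the loop settles near the admission fraction at which the modeled response time equals the set point; a real deployment would replace the toy plant with measurements from the server itself.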
Keywords: Quality of Service, user perception, eCommerce, server design, Internet, Web server design
Abstract: As the number of Web users and the diversity of Web applications continues to explode, Web Quality of Service (QoS) is an increasingly critical issue in the domain of e-Commerce. This paper presents experiments designed to estimate users' tolerance of QoS in the context of e-commerce. In addition to objective measures, we discuss contextual factors that influence these thresholds and show how users' conceptual models of Web tasks affect their expectations. We then show how user thresholds of tolerance can be taken into account when designing Web servers. This integration of user requirements for QoS into systems design is ultimately of benefit to all stakeholders in the design of Internet services.
Keywords: Web server, QoS
Abstract: The evolving needs of conducting commerce over the Internet require more than network quality-of-service (QoS) mechanisms for differentiated services. Empirical evidence suggests that overloaded servers can have a significant impact on user-perceived response times. Furthermore, FIFO scheduling at the server can eliminate any QoS improvements made by network differentiated services. Consequently, server QoS is a key component in delivering end-to-end predictable, stable, and tiered services to end users. This paper describes our research and results for WebQoS, an architecture for supporting server QoS. We demonstrate that, through classification, admission control, and scheduling, we can support distinct performance levels for different classes of users and maintain predictable performance even when the server is subjected to a client request rate several times greater than the server's maximum processing rate.
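The three mechanisms named in this abstract — classification, admission control, and scheduling — can be sketched together. The class names, per-class queue limits, and priority dispatch policy below are illustrative assumptions, not the WebQoS code.

```python
# Hedged sketch of classification + admission control + priority scheduling.
# Class names, queue limits, and the dispatch policy are assumed for
# illustration.
import heapq

QUEUE_LIMIT = {"premium": 100, "basic": 10}   # assumed per-class capacities
PRIORITY = {"premium": 0, "basic": 1}         # lower value = served first

class Server:
    def __init__(self):
        self.queue = []                        # heap of (priority, seq, request)
        self.depth = {"premium": 0, "basic": 0}
        self.seq = 0                           # tie-breaker for FIFO within a class

    def classify(self, request):
        """Classification: map a request to a service class (e.g. by cookie)."""
        return "premium" if request.get("premium") else "basic"

    def admit(self, request):
        """Admission control: reject when the class's queue is full."""
        cls = self.classify(request)
        if self.depth[cls] >= QUEUE_LIMIT[cls]:
            return False                       # e.g. respond 503 Server Busy
        heapq.heappush(self.queue, (PRIORITY[cls], self.seq, request))
        self.depth[cls] += 1
        self.seq += 1
        return True

    def dispatch(self):
        """Scheduling: always serve the highest-priority queued request."""
        _, _, request = heapq.heappop(self.queue)
        self.depth[self.classify(request)] -= 1
        return request
```

Capping the low-priority queue keeps the server responsive under overload: surplus basic-class requests are turned away early rather than queued behind premium traffic.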
The Internet is undergoing substantial changes from a communication and browsing infrastructure to a medium for conducting business and selling a myriad of emerging services. The World Wide Web provides a uniform and widely-accepted application interface used by these services to reach multitudes of clients. These changes place the web server at the center of a gradually emerging e-service infrastructure with increasing requirements for service quality, reliability, and security guarantees in an unpredictable and highly dynamic environment. Towards that end, we introduce a web server QoS provisioning architecture for performance differentiation among classes of clients, performance isolation among independent services, and capacity planning to provide QoS guarantees on request rate and delivered bandwidth. We present a new approach to web server resource management based on web content adaptation. This approach subsumes traditional admission control based techniques and enhances server performance by selectively adapting content in accordance with both load conditions and QoS requirements. Our QoS management solutions can be implemented either in middleware transparent to the server or by direct modification of the server software. We present experimental data to illustrate the practicality of our approach.
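The claim that content adaptation subsumes admission control can be made concrete with a small sketch: instead of a binary admit/reject decision, the server picks a content fidelity level from measured utilization, rejecting only as a last resort. The level names and thresholds are illustrative assumptions, not the paper's policy.

```python
# Hedged sketch of load-driven content adaptation. Thresholds and level
# names are made up for illustration.

def service_level(utilization):
    """Map measured server utilization to a content service level."""
    if utilization < 0.7:
        return "full"             # normal load: serve full content
    if utilization < 0.9:
        return "reduced-images"   # degrade images to shed bandwidth
    if utilization < 0.98:
        return "text-only"        # severe overload: minimal content
    return None                   # reject (classic admission control)
```

Classic admission control is the degenerate case in which the only two outcomes are "full" and rejection; the intermediate levels let the server trade content richness for capacity.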
Communication-oriented abstractions such as atomic multicast, group RPC, and protocols for location-independent mobile computing can simplify the development of complex applications built on distributed systems. This article describes Coyote, a system that supports the construction of highly modular and configurable versions of such abstractions. Coyote extends the notion of protocol objects and hierarchical composition found in existing systems with support for finer-grain microprotocol objects and a nonhierarchical composition scheme for use within a single layer of a protocol stack. A customized service is constructed by selecting microprotocols based on their semantic guarantees and configuring them together with a standard runtime system to form a composite protocol implementing the service. This composite protocol is then composed hierarchically with other protocols to form a complete network subsystem. The overall approach is described and illustrated with examples of services that have been constructed using Coyote, including atomic multicast, group RPC, membership, and mobile computing protocols. A prototype implementation based on extending x-kernel version 3.2 running on Mach 3.0 with support for microprotocols is also presented, together with performance results from a suite of microprotocols from which over 60 variants of group RPC can be constructed.
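The microprotocol composition idea can be sketched in miniature: fine-grained microprotocols register handlers for named events, and a composite protocol dispatches each event through every registered handler. The event model and names below are illustrative assumptions; Coyote itself extends the x-kernel in C.

```python
# Hedged sketch of nonhierarchical microprotocol composition via events.
# Event names and the example microprotocol are assumed for illustration.

class CompositeProtocol:
    def __init__(self):
        self.handlers = {}                     # event name -> list of handlers

    def register(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)

    def fire(self, event, msg):
        """Dispatch an event through every microprotocol that registered for it."""
        for handler in self.handlers.get(event, []):
            handler(msg)

def make_sequencer(proto):
    """Example microprotocol: stamps outgoing messages with sequence numbers,
    the kind of building block an ordered-multicast guarantee needs."""
    counter = {"n": 0}
    def on_send(msg):
        msg["seq"] = counter["n"]
        counter["n"] += 1
    proto.register("send", on_send)
```

A different service variant is built by registering a different subset of microprotocols with the same runtime, which is how a large family of group-RPC variants can share one framework.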
Coliseum is a multiuser immersive remote teleconferencing system designed to provide collaborative workers the experience of face-to-face meetings from their desktops. Five cameras are attached to each PC display and directed at the participant. From these video streams, view synthesis methods produce arbitrary-perspective renderings of the participant and transmit them to others at interactive rates, currently about 15 frames per second. Combining these renderings in a shared synthetic environment gives the appearance of having all participants interacting in a common space. In this way, Coliseum enables users to share a virtual world, with acquired-image renderings of their appearance replacing the synthetic representations provided by more conventional avatar-populated virtual worlds. The system supports virtual mobility (participants may move around the shared space) and reciprocal gaze, and has been demonstrated in collaborative sessions of up to ten Coliseum workstations, and in sessions spanning two continents.
Coliseum is a complex software system that pushes commodity computing resources to the limit. We set out to measure the different aspects of resource usage (network, CPU, memory, and disk) to uncover the bottlenecks and guide enhancement and control of system performance. Latency is a key component of quality of experience for video conferencing. We present how each aspect of the system (cameras, image processing, networking, and display) contributes to total latency. Performance measurement is as complex as the system to which it is applied. We describe several techniques to estimate performance through direct light-weight instrumentation as well as through realistic end-to-end measures that mimic actual user experience. We describe the various techniques and how they can be used to improve system performance for Coliseum and other network applications.
This article summarizes the Coliseum technology and reports on issues related to its performance-its measurement, enhancement, and control.
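The "direct light-weight instrumentation" mentioned above can be sketched simply: each pipeline stage (camera capture, image processing, networking, display) is timed with wall-clock timestamps so its contribution to end-to-end latency can be attributed. The stage names and helper below are illustrative assumptions, not Coliseum's instrumentation code.

```python
# Hedged sketch of per-stage latency instrumentation for a media pipeline.
# Stage names and the wrapper are assumed for illustration.
import time

def instrument(stage_times, name, fn, *args):
    """Run one pipeline stage and accumulate its elapsed wall-clock time."""
    start = time.perf_counter()
    result = fn(*args)
    stage_times[name] = stage_times.get(name, 0.0) + time.perf_counter() - start
    return result

stage_times = {}
frame = instrument(stage_times, "capture", lambda: "raw-frame")
frame = instrument(stage_times, "render", lambda f: f.upper(), frame)
total_latency = sum(stage_times.values())   # per-stage sums attribute the total
```

Summing per-stage times gives the attribution of total latency; comparing that sum against an end-to-end measure that mimics the user experience exposes costs the instrumentation misses (e.g. camera and display hardware delay).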
Images of a scene captured with multiple cameras will have different color values because of variations in color rendering across devices. We present a method to accurately retrieve color information from uncalibrated images taken under uncontrolled lighting conditions with an unknown device and no access to raw data, but with a limited number of reference colors in the scene. The method is used to assess skin tones. A subject is imaged with a calibration target. The target is extracted and its color values are used to compute a color correction transform that is applied to the entire image. We establish that the best mapping is done using a target consisting of skin colored patches representing the whole range of human skin colors. We show that color information extracted from images is well correlated with color data derived from spectral measurements of skin. We also show that skin color can be consistently measured across cameras with different color rendering and resolutions ranging from 0.1 to 4.0 megapixels.
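The target-based correction step described above can be sketched as follows: reference patches with known color values are measured in the image, a linear transform is fitted by least squares, and the transform is applied to every pixel. A full implementation would fit a 3x3 matrix; a per-channel gain keeps this sketch short. All patch values below are made-up illustrative numbers, not measured skin-tone data.

```python
# Hedged sketch of target-based color correction via per-channel
# least-squares gains. Patch values are fabricated for illustration.

def fit_channel_gains(measured, reference):
    """Least-squares scalar gain per channel: minimizes sum of (g*m - r)^2."""
    gains = []
    for c in range(3):
        num = sum(m[c] * r[c] for m, r in zip(measured, reference))
        den = sum(m[c] * m[c] for m in measured)
        gains.append(num / den)
    return gains

def correct(pixel, gains):
    """Apply the fitted transform, clipping to the valid 8-bit range."""
    return tuple(min(255, max(0, round(p * g))) for p, g in zip(pixel, gains))

# Assumed calibration data: camera-measured vs. known patch RGB values.
measured  = [(180, 120, 100), (200, 150, 130), (150, 100, 80)]
reference = [(190, 130, 105), (210, 160, 135), (158, 108, 84)]
gains = fit_channel_gains(measured, reference)
```

Using patches that span the range of human skin colors, as the abstract advocates, concentrates the fit's accuracy in exactly the color region being assessed.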