Online services are making increasing use of dynamically generated Web content. Serving dynamic content is more complex than serving static content. Besides a Web server, it typically involves a server-side application and a database to generate and store the dynamic content. A number of standard mechanisms have evolved to generate dynamic content. We evaluate three specific mechanisms in common use: PHP, Java servlets, and Enterprise Java Beans (EJB). These mechanisms represent three different architectures for generating dynamic content. PHP scripts are tied to the Web server and require writing explicit database queries. Java servlets execute in a different process from the Web server, allowing them to be located on a separate machine for better load balancing. The database queries are written explicitly, as in PHP, but in certain circumstances the Java synchronization primitives can be used to perform locking, reducing database lock contention and the amount of communication between servlets and the database. EJB provides several services and facilities; in particular, many of the database queries can be generated automatically. We measure the performance of these three architectures using two application benchmarks: an online bookstore and an auction site. These benchmarks represent common applications for dynamic content and stress different parts of a dynamic-content Web server: the auction site stresses the server front-end, while the online bookstore stresses the server back-end. For all measurements we use widely available open-source software (the Apache Web server, the Tomcat servlet engine, the JOnAS EJB server, and the MySQL relational database). While Java servlets are less efficient than PHP, their ability to execute on a machine separate from the Web server and to perform synchronization leads to better performance when the front-end is the bottleneck or when there is database lock contention. The facilities and services of EJB come at the cost of lower performance than both PHP and Java servlets.
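To make the architectural contrast concrete, the sketch below shows the servlet pattern the abstract describes: explicit JDBC queries, with a Java-level lock serializing a conflicting read-modify-write so the contention stays out of the database. This is not code from the paper; the servlet name, schema, and connection string are invented for illustration.

    // Hedged sketch of the servlet architecture: explicit queries plus
    // Java-level synchronization. All names here are illustrative.
    import java.io.IOException;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class BidServlet extends HttpServlet {
        private static final String DB_URL = "jdbc:mysql://dbhost/auction"; // invented
        private static final Object bidLock = new Object(); // servlet-side lock

        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            int itemId = Integer.parseInt(req.getParameter("itemId"));
            int bid = Integer.parseInt(req.getParameter("bid"));
            // Serializing the check-then-update in the servlet keeps the
            // conflicting statements out of the database, reducing database
            // lock contention and round trips, as the abstract describes.
            synchronized (bidLock) {
                try (Connection c = DriverManager.getConnection(DB_URL);
                     PreparedStatement read = c.prepareStatement(
                             "SELECT max_bid FROM items WHERE id = ?");
                     PreparedStatement write = c.prepareStatement(
                             "UPDATE items SET max_bid = ? WHERE id = ?")) {
                    read.setInt(1, itemId);
                    ResultSet rs = read.executeQuery();
                    if (rs.next() && bid > rs.getInt("max_bid")) {
                        write.setInt(1, bid);
                        write.setInt(2, itemId);
                        write.executeUpdate();
                    }
                } catch (SQLException e) {
                    resp.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
                }
            }
        }
    }

Under the other two architectures, PHP would issue the same explicit queries inside the Web server process, while EJB would generate the equivalent queries automatically.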
Causeway provides runtime support for the development of distributed meta-applications. These meta-applications control or analyze the behavior of multi-tier distributed applications such as multi-tier web sites or web services. Examples of meta-applications include multi-tier debugging, fault diagnosis, resource tracking, prioritization, and security enforcement. Efficient online implementation of these meta-applications requires metadata to be passed between the different program components. Examples of metadata corresponding to the above meta-applications are request identifiers, priorities, or security principal identifiers. Causeway provides the infrastructure for injecting, destroying, reading, and writing such metadata. The key functionality in Causeway is forwarding the metadata associated with a request at so-called transfer points, where the execution of that request passes from one component to another. This is done automatically for system-visible channels, such as pipes or sockets. An API is provided to implement the forwarding of metadata at system-opaque channels such as shared memory. We describe the design and implementation of Causeway, and we evaluate its usability and performance. Causeway's low overhead allows it to be present permanently in production systems. We demonstrate its usability by showing how to implement, in 150 lines of code and without modification to the application, global priority enforcement in a multi-tier dynamic web server.
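The sketch below is conceptual only: Causeway's actual interface operates at the system level, and none of these names come from the paper. It illustrates the problem at a system-opaque channel: when a request crosses a shared-memory queue between two stages, its metadata (here, a priority) must be explicitly re-associated on the receiving side; this is the forwarding step that Causeway's API supports for such channels and performs automatically for system-visible ones.

    // Conceptual sketch of metadata forwarding at a system-opaque channel.
    // All names (Task, the priority map) are hypothetical.
    import java.util.Map;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;

    public class OpaqueChannelDemo {
        record Task(long requestId, String payload) {}

        // Metadata (a priority) keyed by request identifier.
        static final Map<Long, Integer> priority = new ConcurrentHashMap<>();
        static final BlockingQueue<Task> channel = new ArrayBlockingQueue<>(64);

        public static void main(String[] args) throws InterruptedException {
            // Producer stage: attach metadata before the transfer point.
            priority.put(1L, 7);                 // "write metadata"
            channel.put(new Task(1L, "GET /"));  // system-opaque transfer point

            // Consumer stage: re-associate the metadata on receipt; this is
            // the forwarding that Causeway automates for pipes and sockets.
            Task t = channel.take();
            int p = priority.get(t.requestId()); // "read metadata"
            System.out.println("handling " + t.payload() + " at priority " + p);
        }
    }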
Software-defined networking (SDN) is a well-known example of a research idea that has been reduced to practice in numerous settings. Network virtualization has been successfully developed commercially using SDN techniques. This paper describes our experience in developing production-ready, multi-vendor implementations of a complex network virtualization system. Having struggled to achieve interoperability among vendors with a traditional network protocol approach (based on OpenFlow), we adopted a new approach. We focused first on defining the control information content and then used a generic database protocol to synchronize state between the elements. Within less than nine months of starting the design, we had achieved basic interoperability between our network virtualization controller and the hardware switches of six vendors. This was a qualitative improvement on our decidedly mixed experience using OpenFlow. We found a number of benefits to the database approach, such as speed of implementation, greater hardware diversity, the ability to abstract away implementation details of the hardware, a clearer state consistency model, and extensibility of the overall system.
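As a hedged illustration of the database approach (the binding below is loosely modeled on a hardware VTEP-style MAC table, but the names are invented, not taken from the paper), the essential idea is that the controller and the switches agree on a schema for the control information and synchronize rows of state, rather than exchanging imperative protocol messages:

    // Invented illustration of state synchronization via shared table rows.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class VtepTableDemo {
        // One "table" of control state: MAC address -> remote tunnel endpoint.
        static final Map<String, String> macToVtep = new ConcurrentHashMap<>();

        // Controller side: publish desired state as rows.
        static void controllerWrites() {
            macToVtep.put("52:54:00:ab:cd:ef", "198.51.100.7"); // example row
        }

        // Switch side: program the dataplane from whatever the table says,
        // hiding the hardware's implementation details behind the schema.
        static void switchSyncs() {
            macToVtep.forEach((mac, vtep) ->
                    System.out.println("program: " + mac + " -> tunnel to " + vtep));
        }

        public static void main(String[] args) {
            controllerWrites();
            switchSyncs();
        }
    }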
This paper is concerned with performance debugging of multi-tier applications, such as those commonly found in servers and dynamic-content web sites. Existing tools and techniques for profiling such applications are not general enough to track and profile transactions in a generic multi-tier application. We propose transactional profiling as a general solution to this problem. We provide novel algorithms and techniques to track and profile transactions that flow through shared memory, events, and stages, or via interprocess communication using messages, and to measure interference among concurrent transactions. We describe the design and implementation of Whodunit, our prototype transactional profiler. We demonstrate the correctness of our algorithm for tracking transaction flow through shared memory using Apache and MySQL, and we illustrate the use of Whodunit in obtaining the transactional profile of web servers, a web proxy cache, and a bookstore application.
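The following sketch is not Whodunit's implementation; all names in it are invented. It only illustrates what a transactional profile is: a transaction identifier is carried across stages, and the time spent in each stage is attributed to that transaction rather than to the process or thread that happened to execute it.

    // Minimal invented sketch of a per-transaction, per-stage profile.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class TxProfiler {
        // (txId, stage) -> accumulated nanoseconds
        static final Map<String, Long> profile = new ConcurrentHashMap<>();

        static void runStage(long txId, String stage, Runnable work) {
            long start = System.nanoTime();
            work.run();
            profile.merge(txId + "/" + stage, System.nanoTime() - start, Long::sum);
        }

        public static void main(String[] args) {
            // One request flows through two stages; the shared txId ties the
            // samples together even if different threads run each stage.
            runStage(42, "parse", () -> {});
            runStage(42, "query", () -> {});
            profile.forEach((k, ns) -> System.out.println(k + ": " + ns + " ns"));
        }
    }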
The library is one of the essential places in any academic institution. With the advent of ICT (Information and Communications Technology), every sector is trying to automate its systems, and libraries are no exception. Technology has taken hold in and helped the library in various ways. Today, libraries need new tools that allow them to increase their productivity and improve user service without adding personnel. Applying barcode technology in libraries is a way to process user requests much faster. Barcode technology is used mostly in a library's circulation system, where it has been most successful because of its speed, accuracy, and reliability. Although barcoding is a comparatively old technology and one of the significant steps in library automation, it is still not widely used in libraries. This article briefly describes the application of barcode technology in libraries, its working mechanism, and the advantages and disadvantages of the technology. The author explores the technical aspects of barcode technology in the library, with the main aim of making librarians aware of its use. Barcodes are a very cost-effective technology that every library can adopt. The paper also discusses the creation of barcodes with gLabels, open-source software that is absolutely free of cost.
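The article generates barcodes with the gLabels GUI; as a programmatic alternative, the hedged sketch below encodes an accession number as a Code 39 barcode (a symbology commonly used for library labels) with the open-source ZXing library, which is likewise free of cost. The accession number and output file name are invented.

    // Hedged sketch: generate a Code 39 barcode image with ZXing
    // (requires the zxing core and javase artifacts on the classpath).
    import com.google.zxing.BarcodeFormat;
    import com.google.zxing.WriterException;
    import com.google.zxing.client.j2se.MatrixToImageWriter;
    import com.google.zxing.common.BitMatrix;
    import com.google.zxing.oned.Code39Writer;
    import java.io.IOException;
    import java.nio.file.Path;

    public class AccessionBarcode {
        public static void main(String[] args) throws WriterException, IOException {
            String accessionNumber = "LIB-004217"; // invented accession number
            // Encode as a 300x80 pixel Code 39 bit matrix.
            BitMatrix matrix = new Code39Writer()
                    .encode(accessionNumber, BarcodeFormat.CODE_39, 300, 80);
            // Write the matrix out as a PNG ready for label printing.
            MatrixToImageWriter.writeToPath(matrix, "PNG", Path.of("barcode.png"));
        }
    }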