By analysing large data sets on jobs processed in major computing centres, we study how operations management principles apply to these modern-day processing plants. We show that Little's Law on long-term performance averages holds for computing centres, i.e. work-in-progress equals throughput rate multiplied by process lead time. Contrary to traditional manufacturing principles, the law of variation does not hold for computing centres: the more variation in job lead times, the better the throughput and utilisation of the system. We also show that as the utilisation of the system increases, lead times and work-in-progress increase, which complies with traditional manufacturing. In comparison with current computing centre operations, these results imply that better allocation of jobs could increase throughput and utilisation while requiring fewer computing resources, thus increasing the overall efficiency of the centre. From a theoretical point of view, in a system with close to zero set-up times, as is the case for computing centres, the law of variation does not hold: we observe that the more variation in job lead times and resource usage, the higher the throughput of the system.
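The Little's Law relationship stated above (work-in-progress = throughput × lead time) can be checked numerically. The sketch below simulates a single-server FIFO queue with exponential arrivals and service times; the queue model, rates, and horizon are illustrative assumptions, not the computing-centre data analysed in the paper.

```python
import random

def mm1_little_check(lam=0.8, mu=1.0, n_jobs=100_000, seed=1):
    """Simulate an M/M/1 FIFO queue and return (time-average WIP,
    throughput x mean lead time); Little's Law says they should match."""
    rng = random.Random(seed)
    arrivals, finishes = [], []
    t, free_at = 0.0, 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(lam)           # next arrival time
        start = max(t, free_at)             # wait if the server is busy
        free_at = start + rng.expovariate(mu)
        arrivals.append(t)
        finishes.append(free_at)
    horizon = finishes[-1]
    # Time-average WIP: sweep arrival (+1) and departure (-1) events and
    # integrate the number of jobs in the system over time.
    events = sorted([(a, 1) for a in arrivals] + [(f, -1) for f in finishes])
    area, n, prev = 0.0, 0, 0.0
    for time, delta in events:
        area += n * (time - prev)
        n += delta
        prev = time
    avg_wip = area / horizon
    throughput = n_jobs / horizon
    avg_lead = sum(f - a for a, f in zip(arrivals, finishes)) / n_jobs
    return avg_wip, throughput * avg_lead
```

Over a horizon in which every simulated job completes, the two quantities agree up to floating-point error, which is exactly the long-run-average form of the law the abstract refers to.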
Purpose: Data centers (DCs) are similar to traditional factories in many respects, such as response-time constraints, limited capacity, and utilization levels. Several indicators have been developed to monitor and compare productivity in manufacturing; in DCs, however, the most widely used indicators focus on technical aspects of the infrastructure rather than on the efficiency of operations. The purpose of this paper is to draw on operations management to define a commensurate and proportionate DC performance indicator: the energy-efficient utilization indicator (EEUI). EEUI makes an objective and comparative assessment of efficiency possible, independently of the operating environment and its constraints.

Design/methodology/approach: The authors followed a design science approach, which follows the practitioner's initial steps for finding solutions to business-relevant problems prior to theory building. This approach therefore fits the research well, as it is primarily motivated by business and management needs. EEUI combines the amount of energy consumed by the different server components with their current energy efficiency (EE). It reaches its highest value when all server components are optimally loaded in the EE sense. The authors tested EEUI by collecting data from three scientific DCs and performing controlled laboratory tests.

Findings: The results indicate that optimizing EEUI makes it possible to run computing resources more efficiently. This leads to higher EE and throughput of the DC while reducing the carbon footprint associated with DC operations. Both energy-related costs and the total cost of ownership are consequently reduced, since the amount of energy and hardware resources needed decreases, while DC sustainability improves.

Practical implications: In comparison with current DC operations, the results imply that using the EEUI could help increase the EE of DCs. To optimize the proposed EEUI, DC managers and operators should use resource management policies that increase the variation in resource usage among jobs processed on the same computing resources (e.g. servers).

Originality/value: The paper provides a novel approach to monitoring the EE at which computing resources are used. The proposed indicator considers not only the utilization levels at which server components are used but also their EE and energy proportionality.
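The abstract describes the EEUI only qualitatively: it weighs each component's share of the energy consumed by that component's current energy efficiency, and it peaks when every component sits at its most energy-efficient load. The sketch below is a hypothetical illustration of that idea only; the linear efficiency curve, the weighting scheme, and all numbers are assumptions, not the formula defined in the paper.

```python
def efficiency(load, optimal_load):
    """Assumed EE curve: 1.0 at the component's optimal load,
    falling off linearly on either side (an illustrative shape)."""
    return max(0.0, 1.0 - abs(load - optimal_load) / optimal_load)

def eeui(components):
    """Hypothetical EEUI-style score for a server.

    components: list of (energy_consumed, current_load, optimal_load).
    Each component's EE score is weighted by its share of the total
    energy consumed; the result is 1.0 only when every component runs
    at its optimal load, echoing the property stated in the abstract.
    """
    total_energy = sum(e for e, _, _ in components)
    return sum((e / total_energy) * efficiency(load, opt)
               for e, load, opt in components)

# Illustrative server: a CPU near its optimum and a lightly loaded
# memory subsystem that drags the indicator down.
server = [(120.0, 0.70, 0.75),   # CPU: energy, utilisation, optimal point
          (40.0, 0.20, 0.60)]    # memory
```

Under this toy model, raising the memory load toward its optimal point raises the score, which matches the practical implication above: policies that mix jobs so that all components of a server are well loaded improve the indicator.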
The energy-efficiency of server hardware, web server software, and databases has been widely studied. However, studies that combine these aspects are rare. In this paper, the authors present an energy-efficiency evaluation of a web and database application. They concentrate on the following aspects: server BIOS and operating system energy optimization, and "bursting", i.e. queuing requests and then executing them in bursts. The authors used the bursting method with both a database application and a combined web and database application. Their results indicate energy savings of about 10% using this method. They analyse the model with statistical tools and present an equation expressing the relationship between quality of service and burst wait time.
The energy-efficiency of server hardware, web server software, and databases has been widely studied. However, studies that combine these aspects are rare. In this chapter, the authors present an energy-efficiency evaluation of a web/database application in a Windows/IIS/MSSQL environment running on an industrial-grade Intel server. Moreover, they provide a wide overview of related research and technologies. Researchers have noticed that, despite energy-saving technologies, the energy consumption of data centers is still growing. To resolve this dilemma, the authors explore the background and propose concrete solutions. They concentrate on the following aspects: server BIOS/operating system energy optimization (limited impact) and "bursting" (i.e., queuing requests and then executing them in bursts). The authors have used the bursting method with both database and web/database applications. Their results indicate about 10% energy savings using this method. The authors analyse the model using statistical tools and present an equation expressing the relationship between quality of service and burst wait time.
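The "bursting" idea in the two abstracts above can be sketched as a simple batching scheduler: requests are queued and released together once a burst wait time has elapsed, so the server can idle (or run at an efficient operating point) between bursts. The function names, the flushing rule, and the quality-of-service proxy below are assumptions for illustration, not the authors' implementation or their published QoS equation.

```python
def burst_schedule(request_times, burst_wait):
    """Group request arrival times into bursts.

    A burst is flushed burst_wait seconds after its first queued
    request arrives; later requests join the next burst.
    Returns a list of (flush_time, [request_times]) pairs.
    """
    bursts, current, flush_at = [], [], None
    for t in sorted(request_times):
        if flush_at is not None and t > flush_at:
            bursts.append((flush_at, current))
            current, flush_at = [], None
        if flush_at is None:
            flush_at = t + burst_wait   # first request opens a new burst
        current.append(t)
    if current:
        bursts.append((flush_at, current))
    return bursts

def mean_wait(bursts):
    """Average extra latency a request pays for being batched: a crude
    quality-of-service proxy that degrades as burst_wait grows."""
    waits = [flush - t for flush, reqs in bursts for t in reqs]
    return sum(waits) / len(waits)

# Example: three closely spaced requests share one burst; a late
# request opens a second burst of its own.
bursts = burst_schedule([0.0, 0.1, 0.2, 1.5], burst_wait=0.5)
```

This makes the trade-off the abstracts analyse concrete: a longer burst wait time consolidates more requests per burst (more energy saved) but raises the mean wait, i.e. lowers quality of service.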