Executive Summary

Background
In June 2010, the National Renewable Energy Laboratory (NREL) completed construction on the new 220,000-square-foot (ft²) Research Support Facility (RSF), which included a 1,900-ft² data center (the RSF will expand to 360,000 ft² with the opening of an additional wing in December 2011). The project's request for proposals (RFP) set a whole-building demand-side energy use requirement of a nominal 35 kBtu/ft² per year. On-site renewable energy generation offsets the annual energy consumption. The original "legacy" data center had annual energy consumption as high as 2,394,000 kilowatt-hours (kWh), which alone would have exceeded the total building energy goal. As part of meeting the building energy goal, the RSF data center's annual energy use had to be approximately 50% less than the legacy data center's annual energy use. This report documents the methodology used to procure, construct, and operate an energy-efficient data center suitable for a net-zero-energy-use building.

Development Process
The legacy data center on NREL's campus used a number of individual servers with a utilization of less than 5%. When the total data center power draw was divided among all users, the continuous power consumption rate per person was 151 watts (W). The uninterruptible power supply (UPS) and room power distribution units were 80% efficient. Chilled water was produced by one multi-stage air-cooled chiller unit and a backup single-stage air conditioning (AC) chiller unit, which delivered chilled water to seven computer room air handlers (CRAHs). The CRAHs supplied cool air through an underfloor plenum that also served as a passageway for most cables, conduits, and chilled water pipes, which increased the fan energy required to move air between the CRAHs and the servers. Open hot and cold aisles added to the inefficiency of the existing data center by allowing the chilled supply air to mix with hot return air. Additionally, two walls of the data center were floor-to-ceiling exterior windows with southwestern exposure that introduced solar heat gain to the space and required additional cooling.

Evaluation Approach and Results
The RSF data center was designed using blade servers running virtualized servers. When the total data center power draw is divided among all users, the continuous power consumption rate per person is 45 W. The UPS and room power distribution are 95% efficient. Evaporative cooling and air-side economizing are designed to cool the air to 74°F. Cool air is supplied to the servers through dedicated underfloor and overhead plenums. Cooling efficiency is enhanced by a contained hot aisle, which also allows waste heat from the hot aisles to be recovered and used elsewhere in the building when needed, reducing heating loads. The new data center is mostly below grade and has no windows, helping to insulate the room from ambient outdoor conditions.

Results
At 958,000 kWh, the RSF data center's annual energy use is approximately 60% less than the legacy data center's annual energy use; this results in ...
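To make the scale of these savings concrete, the short sketch below recomputes the headline percentages from the figures quoted above (legacy versus RSF annual energy and per-person power). The calculation is purely illustrative and is not part of the original report.

```python
# Illustrative check of the executive-summary figures (values taken from the report text).
LEGACY_ANNUAL_KWH = 2_394_000   # legacy data center annual energy use
RSF_ANNUAL_KWH = 958_000        # RSF data center annual energy use
LEGACY_W_PER_PERSON = 151       # continuous power draw per person, legacy
RSF_W_PER_PERSON = 45           # continuous power draw per person, RSF

energy_reduction = 1 - RSF_ANNUAL_KWH / LEGACY_ANNUAL_KWH
per_person_reduction = 1 - RSF_W_PER_PERSON / LEGACY_W_PER_PERSON

print(f"Annual energy reduction:    {energy_reduction:.0%}")      # ~60%
print(f"Per-person power reduction: {per_person_reduction:.0%}")  # ~70%
```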
This report investigates 13 storm hardening measures for solar PV systems, summarized in Table 1. For more background on these measures, see Robinson (2018).
In this study, we report on the first tests of Asetek's RackCDU direct-to-chip liquid cooling system for servers at NREL's ESIF data center. The system was simple to install on the existing servers and integrated directly into the data center's existing hydronics system. The focus of this study was to explore the total cooling energy savings and the potential for waste-heat recovery of this warm-water liquid cooling system. RackCDU captured up to 64% of server heat into the liquid stream at an outlet temperature of 89°F, and 48% at outlet temperatures approaching 100°F. The system was designed to capture heat from the CPUs only, indicating a potential for increased heat capture if memory cooling were included. Reduced temperatures inside the servers caused all fans to reduce power to the lowest possible BIOS setting, indicating further energy savings potential if additional fan control were included. Preliminary studies that manually reduced fan speed (and even removed fans) validated this potential savings but could not be optimized for these working servers. The Asetek direct-to-chip liquid cooling system has been in operation with users for 16 months with no maintenance required and no leaks.

List of Acronyms
AHU - air handling unit
BIOS - basic input/output system
Btu - British thermal unit
CDU - cooling distribution unit
cfm - cubic feet per minute
CPU - central processing unit
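As a rough illustration of what those capture fractions mean for waste-heat recovery, the sketch below estimates the heat carried away by the liquid loop for an assumed rack IT load. The 64% and 48% capture fractions come from the measurements above; the 10 kW rack load and the helper function are hypothetical examples, not values or code from the study.

```python
# Rough estimate of recoverable waste heat from the measured capture fractions.
# The 10 kW rack IT load is an assumed example value, not a figure from the study.

KW_TO_BTU_PER_HR = 3412.14  # unit conversion: kilowatts to Btu per hour

def liquid_heat_recovery(it_load_kw: float, capture_fraction: float) -> float:
    """Return the heat captured into the liquid loop, in kW."""
    return it_load_kw * capture_fraction

rack_it_load_kw = 10.0  # hypothetical rack IT load
for outlet_temp_f, fraction in [(89, 0.64), (100, 0.48)]:
    captured_kw = liquid_heat_recovery(rack_it_load_kw, fraction)
    print(f"Outlet ~{outlet_temp_f}°F: {captured_kw:.1f} kW "
          f"({captured_kw * KW_TO_BTU_PER_HR:,.0f} Btu/h) captured to liquid")
```

At higher outlet temperatures the recovered heat is more useful for building reuse, but the capture fraction drops, which is the trade-off the study quantifies.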
NOTICE
This report was prepared as an account of work sponsored by an agency of the United States government. Neither the United States government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or any agency thereof. Printed on paper containing at least 50% wastepaper, including 10% post-consumer waste.

Acknowledgments
The authors would like to thank all of the team members for their creativity, persistence, and willingness to support this project. The members of the BCHA team who created the strategic vision for the project and successfully executed the demonstration include Scott Simkus, Frank Alexander, and Chuck Schloz. HB&A architects and Farnsworth Group engineers, including Steve Powell, Tino Leone, and Corey Chinn, worked with all of the project partners to integrate high-performance building systems into an architecturally significant building design. The Colorado School of Mines Geophysics Department conducted the very first EM tests in 2008 to verify a low risk of mine subsidence and prepared a critically important report for BCHA documenting the low subsidence risk and the presence of an underground aquifer, which substantiated the case to further explore GSHP for the main site. Major Geothermal provided GSHP design and modeling support. Various members of NREL's residential buildings research team provided modeling and technology selection support, including