Building energy information systems (EIS) are a powerful customer-facing monitoring and analytical technology that can enable site energy savings of up to 20%. Few technologies are as heavily marketed, yet in spite of this potential, EIS remain an under-adopted emerging technology. One reason is the lack of information on purchase costs and associated energy savings. While insightful, the growing body of individual case studies has not given industry the information needed to establish the business case for investment, because vastly different energy and economic metrics prevent generalizable conclusions. This paper addresses three common questions concerning EIS use: What are the costs? What have users saved? Which best practices drive deeper savings? We present a large-scale assessment of the value proposition for EIS use based on data from more than two dozen organizations. Participants achieved year-over-year median site and portfolio savings of 17% and 8%, respectively, and reported that this performance would not have been possible without the EIS. The median five-year cost of EIS software ownership (up-front and ongoing costs) was calculated to be $1,800 per monitoring point (kilowatt meter points were most common), with a median portfolio-wide implementation size of approximately 200 points.

In this paper, we present an analysis of the relationship between key implementation factors and achieved energy reductions. The extent of efficiency projects, building energy performance prior to EIS installation, depth of metering, and duration of EIS use were strongly correlated with greater savings. We also identify the best practices in EIS use that are associated with greater energy savings.

Introduction

Building energy information systems (EIS) are broadly defined as the web-based analysis software, data acquisition hardware, and communication systems used to store, analyze, and display whole-building, system-level, or equipment-level energy use (Granderson et al.
2009; Motegi et al. 2003). Fig. 1 shows a schematic diagram of an EIS. At a minimum, an EIS provides hourly or sub-hourly interval meter data with graphical and analytical capabilities. The data in an EIS come primarily from electric and gas meters but can also include other sources, such as building automation systems (BAS); the data integrated into the system depend on the level of monitoring present at the site. A data acquisition system in the building gathers the data and transmits it to a server that is on-site or in the cloud. The server stores and analyzes the data. External data sources, such as weather data or utility price and demand-response information, may in some cases be integrated into the EIS to support its analytical capabilities. EIS users can view the data and analysis results in graphical or report format through the user interface. A key set of EIS analytical capabilities (Granderson, Piette, and Rosenblum 2011; Kramer et al. 2013) includes:
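Interval meter data underpin these analytical capabilities. As a minimal illustration (not taken from the paper; the data and function are hypothetical), the following Python sketch computes an average hourly load profile from hourly kW readings, one of the simplest analyses an EIS performs:

```python
from datetime import datetime, timedelta

def average_hourly_profile(readings):
    """Average load by hour of day from (timestamp, kW) interval data."""
    totals = [0.0] * 24
    counts = [0] * 24
    for ts, kw in readings:
        totals[ts.hour] += kw
        counts[ts.hour] += 1
    return [totals[h] / counts[h] if counts[h] else 0.0 for h in range(24)]

# Hypothetical two days of hourly readings: 50 kW overnight, 120 kW 9am-5pm.
start = datetime(2024, 1, 1)
readings = []
for i in range(48):
    ts = start + timedelta(hours=i)
    readings.append((ts, 120.0 if 9 <= ts.hour < 17 else 50.0))

profile = average_hourly_profile(readings)
```

A real EIS would layer weather normalization, baselining, and anomaly detection on top of profiles like this one.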
Fault detection and diagnosis (FDD) represents one of the most active areas of research and commercial product development in the buildings industry. This paper addresses two questions concerning FDD implementation and advancement: (1) What are today's users of FDD saving and spending on the technology? (2) What methods and datasets can be used to evaluate and benchmark FDD algorithm performance? Relevant to the first question, 26 organizations that use FDD across a total of 550 buildings and 97 million square feet achieved median savings of 8%. Twenty-seven FDD users reported that the median base cost for FDD software, annual recurring software cost, and annual labor cost were $8, $2.70, and $8 per monitoring point, respectively, with a median implementation size of approximately 1,300 points. To address the second question, this paper describes a systematic methodology for evaluating the performance of FDD algorithms, curates an initial test dataset of air handling unit (AHU) system faults, and completes a trial to demonstrate the evaluation process on three sample FDD algorithms. This work provides a first step toward a standard evaluation of different FDD technologies. It showed that the test methodology is scalable and repeatable, provided an understanding of the types of insights that can be gained from algorithm performance testing, and highlighted priorities for further expanding the test dataset.
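The reported per-point cost medians imply a simple cost-of-ownership estimate. A minimal sketch, using the median figures reported above (the five-year horizon is an assumption chosen here for illustration, mirroring the EIS cost metric):

```python
def five_year_cost_per_point(base_cost, annual_software, annual_labor, years=5):
    """Up-front base cost plus recurring software and labor over the horizon."""
    return base_cost + years * (annual_software + annual_labor)

# Median per-point figures reported by the surveyed FDD users.
cost_per_point = five_year_cost_per_point(8.0, 2.7, 8.0)  # 8 + 5 * 10.7 = 61.5
portfolio_cost = cost_per_point * 1300                     # median ~1,300 points
```

At these medians, a median-sized implementation would cost roughly $80,000 over five years, the figure against which the reported 8% median savings would be weighed.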
It is estimated that approximately 4-5% of national energy consumption could be saved through corrections to existing commercial building controls infrastructure and the resulting improvements to efficiency. Correspondingly, automated fault detection and diagnostics (FDD) algorithms are designed to identify the presence of operational faults and their root causes. A diversity of techniques is used for FDD, spanning physical-model, black-box, and rule-based approaches. A persistent challenge has been the lack of common datasets and test methods for benchmarking their performance accuracy. This article presents a first-of-its-kind public dataset with ground-truth data on the presence and absence of building faults. The dataset spans a range of seasons and operational conditions and encompasses multiple building system types. It contains information on fault severity, as well as data points reflective of the measurements that FDD algorithms typically have access to in building control systems. The data were created using simulation models as well as experimental test facilities, and will be expanded over time.
Fault detection and diagnosis (FDD) algorithms for building systems and equipment represent one of the most active areas of research and commercial product development in the buildings industry. However, far more effort has gone into developing these algorithms than into assessing their performance. As a result, considerable uncertainties remain regarding the accuracy and effectiveness of both research-grade FDD algorithms and commercial products, a state of affairs that has hindered the broad adoption of FDD tools. This article presents a general, systematic framework for evaluating the performance of FDD algorithms. The article focuses on understanding the possible answers to two key questions: in the context of FDD algorithm evaluation, what defines a fault and what defines an evaluation input sample? The answers to these questions, together with appropriate performance metrics, may be used to fully specify evaluation procedures for FDD algorithms.
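Any such evaluation ultimately compares algorithm outputs against ground-truth fault labels on a set of input samples. As a hedged sketch (the boolean sample encoding and metric selection here are illustrative assumptions, not the article's specification), standard confusion-matrix rates could be computed as:

```python
def fdd_detection_metrics(ground_truth, predictions):
    """Confusion-matrix rates for fault detection over labeled samples.

    ground_truth / predictions: sequences of booleans (True = fault present).
    """
    pairs = list(zip(ground_truth, predictions))
    tp = sum(g and p for g, p in pairs)            # faults correctly flagged
    fp = sum((not g) and p for g, p in pairs)      # false alarms
    fn = sum(g and (not p) for g, p in pairs)      # missed faults
    tn = sum((not g) and (not p) for g, p in pairs)
    return {
        "true_positive_rate": tp / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
    }

# Hypothetical labels for eight evaluation samples.
truth = [True, True, True, False, False, False, False, True]
preds = [True, False, True, False, True, False, False, True]
metrics = fdd_detection_metrics(truth, preds)
```

In practice the false-alarm rate matters as much as the detection rate, since spurious fault reports erode operator trust in an FDD tool.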