2018
DOI: 10.1016/j.energy.2017.12.019
Gigawatt-hour scale savings on a budget of zero: Deep reinforcement learning based optimal control of hot water systems

Abstract: Energy consumption for hot water production is a major draw in high efficiency buildings. Optimizing this has typically been approached from a thermodynamics perspective, decoupled from occupant influence. Furthermore, optimization usually presupposes the existence of a detailed dynamics model for the hot water system. These assumptions lead to suboptimal energy efficiency in the real world. In this paper, we present a novel reinforcement learning based methodology which optimizes hot water production. The propose…
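The abstract describes reinforcement-learning-based control of hot water production but the full method is truncated above. As a generic illustration of the control framing only (not the paper's actual model-based algorithm), a minimal tabular Q-learning sketch with invented toy dynamics, temperature bins, and a made-up comfort/energy reward:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical discretized tank state: temperature bins; actions: 0 = idle, 1 = heat.
N_BINS, ACTIONS = 12, 2
Q = np.zeros((N_BINS, ACTIONS))

def step(bin_idx, action):
    # Toy dynamics: heating raises the temperature bin, standing losses lower it.
    nxt = min(bin_idx + 2, N_BINS - 1) if action else max(bin_idx - 1, 0)
    # Toy reward: penalize energy use, penalize dropping below the comfort band (bin >= 4).
    reward = -1.0 * action - (5.0 if nxt < 4 else 0.0)
    return nxt, reward

alpha, gamma, eps = 0.1, 0.95, 0.2
for episode in range(300):
    s = int(rng.integers(0, N_BINS))
    for _ in range(50):
        # Epsilon-greedy action selection, then a standard Q-learning update.
        a = int(rng.integers(0, ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

# Learned policy: heat when the tank is cold, idle when it is warm.
policy = Q.argmax(axis=1)
```

Under these toy dynamics the greedy policy heats in the low bins and idles in the high ones; the paper itself targets the harder model-based setting sketched in the citation statements below.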

Cited by 81 publications (38 citation statements). References 27 publications.
“…By combining Equations (22)–(24), the temperature at the inside top of the tank T_t,top can be solved:…”
Section: Heat Loss From the Bottom of the Tank Can Be Calculated By T
confidence: 99%
“…22 Occupant behavior has a significant impact on the results of future energy demand prediction and has become an important aspect in reducing thermal losses from energy-inefficient buildings. 23 Kazmi et al 13 further indicated that obtaining a better understanding of occupant behavior only increases in importance as a necessary first step. However, most data-driven predictive methods make predictions only on aggregated energy consumption.…”
Section: Introduction
confidence: 99%
“…Furthermore, the RL approach allows, using past experience and active exploration, accounting for the non-stationary features of the physical environment related to seasonal changes and natural degradation. An example application of RL techniques can be found in Reference [51], where the authors present a model-based RL algorithm to optimize the energy efficiency of hot water production systems. The authors make use of an ensemble of deep neural networks to approximate the transition function and get estimates of the current state uncertainty.…”
Section: Applications of Reinforcement Learning to Energy Systems
confidence: 99%
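The citation above summarizes the paper's model-based setup: an ensemble of learned transition models whose disagreement serves as an uncertainty estimate. As a heavily simplified sketch of that idea only (bootstrapped linear models on invented 1-D tank dynamics, standing in for the paper's deep neural networks and real data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D tank dynamics: next temperature after one control step.
def true_transition(temp, heat_on):
    return temp + (4.0 if heat_on else 0.0) - 0.1 * (temp - 20.0)

# Collect a small noisy dataset of (temperature, action) -> next temperature.
X, y = [], []
for _ in range(200):
    t = rng.uniform(20.0, 70.0)
    a = int(rng.integers(0, 2))
    X.append([t, a])
    y.append(true_transition(t, a) + rng.normal(0.0, 0.2))
X, y = np.asarray(X), np.asarray(y)

# Ensemble member: least-squares fit on a bootstrap resample of the data.
def fit_member(X, y, rng):
    idx = rng.integers(0, len(X), len(X))
    Xb = np.hstack([X[idx], np.ones((len(idx), 1))])  # add bias column
    w, *_ = np.linalg.lstsq(Xb, y[idx], rcond=None)
    return w

ensemble = [fit_member(X, y, rng) for _ in range(5)]

def predict(temp, heat_on):
    # Ensemble mean is the transition estimate; the spread across members
    # is the (epistemic) uncertainty proxy used to gauge model confidence.
    x = np.array([temp, heat_on, 1.0])
    preds = np.array([x @ w for w in ensemble])
    return preds.mean(), preds.std()

mean, std = predict(45.0, 1)
```

The same disagreement-as-uncertainty pattern applies when the members are deep networks, as in the cited work; only the function class changes.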
“…This challenge arises when controlling a large cluster of residential flexibility assets, for example, domestic hot water heaters (DHWHs). These loads are a high-potential source of flexibility for demand response programs [2], [5], driven by their decentralized abundance [6], considerable and efficient storage capacity [7], and negligible inertia. Algorithms that tap into this resource must take into account a variety of factors, including: (1) intertemporal energy constraints, (2) intrinsically uncertain user behavior, (3) methods to update models and control strategies as new information is collected, and (4) the computational challenges associated with managing thousands to millions of devices to perform system-scale services.…”
Section: Introduction
confidence: 99%