2018
DOI: 10.3233/jcm-180792
A Sarsa-based adaptive controller for building energy conservation

Cited by 4 publications (1 citation statement)
References 10 publications
“…For more efficient conservation of the building energy, RL has been applied to optimize heating, ventilation, and air conditioning parameters (Yu et al., 2021). The main RL algorithms applied in building energy control are tabular Q‐learning (S. Liu & Henze, 2006; Yang et al., 2015), deep Q‐network (Ahn & Park, 2020), deep deterministic policy gradient (DDPG; Du et al., 2021), advantage actor critic (Morinibu et al., 2019), asynchronous advantage actor‐critic (Z. Zhang et al., 2019), double deep Q‐learning, and state‐action‐reward‐state‐action (Fu et al., 2018).…”
Section: Introduction
Confidence: 99%
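The citation statement above lists SARSA (state-action-reward-state-action) among the RL algorithms applied to building energy control. As an illustrative sketch only, not the cited paper's controller, the core of tabular SARSA is an on-policy temporal-difference update whose target uses the action actually taken in the next state (all names and the toy state/action setup below are hypothetical):

```python
import random
from collections import defaultdict

def epsilon_greedy(Q, state, n_actions, epsilon):
    """Pick a random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q[(state, a)])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """On-policy TD update: Q(s,a) += alpha * (r + gamma*Q(s',a') - Q(s,a))."""
    td_target = r + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])
    return Q[(s, a)]

# Toy usage: a table initialised to zero; one reward of 1.0 moves
# Q(0,1) from 0 to alpha * 1.0 = 0.1.
Q = defaultdict(float)
sarsa_update(Q, s=0, a=1, r=1.0, s_next=1, a_next=0)
```

Because the update bootstraps from the action the behaviour policy actually selects next (rather than the greedy maximum, as Q-learning does), SARSA is on-policy, which is the distinction the citing survey draws between it and the Q-learning variants it lists alongside.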