This paper discusses the development of a coupled Q-learning/fuzzy control algorithm applied to the control of solar domestic hot water systems. The controller matches the performance of the best reference controllers without requiring prior modelling and simulation work to tune its parameters before deployment. The performance of the proposed control algorithm was analysed in detail with respect to the input membership functions defining the fuzzy controller. The algorithm was compared to four standard reference control cases using three performance figures: the seasonal performance factor of the solar collectors, the seasonal performance factor of the system and the number of on/off cycles of the primary circulator. The work shows that the reinforcement learning controller can find the best-performing fuzzy controller within a family of controllers. It also shows how to accelerate the learning process by loading the controller with partial pre-existing information. The new controller performed significantly better than the best reference case with regard to the collectors' performance factor (between 15% and 115% improvement) and, at the same time, to the number of on/off cycles of the primary circulator (1.2 per day, down from 30 per day). Regarding the domestic hot water performance factor, the new controller performed about 11% worse than the best reference controller but greatly improved its on/off cycle figure (425 cycles, down from 11,046). The decrease in performance was due to the choice of reward function, which was not designed for that purpose and was blind to some of the factors influencing the system performance factor.
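As a point of reference for readers unfamiliar with the learning rule underlying such a controller, the following is a minimal sketch of tabular Q-learning with epsilon-greedy action selection. The state/action encoding, reward signal and parameter values here are illustrative placeholders, not the paper's actual design (in the paper, actions correspond to members of a family of fuzzy controllers and the reward is derived from system performance):

```python
# Minimal sketch of tabular Q-learning with epsilon-greedy exploration.
# All states, actions, rewards and parameters below are hypothetical
# placeholders, not the paper's actual encoding.
import random
from collections import defaultdict

class QLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q(s, a), defaults to 0
        self.actions = actions        # e.g. candidate fuzzy controllers
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor
        self.epsilon = epsilon        # exploration probability

    def choose(self, state):
        # Epsilon-greedy: explore at random, otherwise exploit best known action.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error
```

Pre-loading the Q-table with partial prior information, as the paper describes, amounts to initialising some `self.q[(state, action)]` entries to non-zero values before learning starts.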
The paper discusses the results of a comparative analysis of the performance of different control strategies applied to a reference solar DHW system. Three classes of control strategies have been considered: so-called naïve control strategies based on on/off control of the solar collector pump using temperature differences, solar irradiation or both; fuzzy-based control strategies; and a reinforcement learning-based strategy coupling a Q-learning algorithm to a fuzzy controller. The performance figures used in the analysis are the seasonal performance factor at the primary side of the circuit (SPFcoll), the seasonal performance factor of the whole DHW preparation process (SPFDHW) and the number of times the circulation pump is switched on and off (NON-OFF). The analysis, carried out numerically, has been performed using the TRNSYS simulation software coupled to a LabVIEW implementation of the controllers. The analysis suggests that controllers able to find a nearly optimal policy without requiring prior modelling of the system can be implemented using a reinforcement learning algorithm, and supports the conclusion that well-designed control strategies can significantly increase the performance of such systems.
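For concreteness, a typical naïve differential-temperature strategy of the kind used as a reference case can be sketched as an on/off rule with hysteresis, which is what keeps the cycle count (NON-OFF) bounded. The threshold values below are illustrative assumptions, not the paper's actual settings:

```python
# Hedged sketch of a naive on/off pump controller driven by the temperature
# difference between collector outlet and store, with a hysteresis band to
# limit on/off cycling. The dt_on/dt_off thresholds are assumed values.
def pump_command(t_collector, t_store, pump_on, dt_on=7.0, dt_off=2.0):
    """Return the new pump state (True = on) given current temperatures."""
    dt = t_collector - t_store
    if dt >= dt_on:
        return True       # enough solar gain: switch/keep the pump on
    if dt <= dt_off:
        return False      # gain too small: switch/keep the pump off
    return pump_on        # inside the hysteresis band: keep previous state
```

Without the hysteresis band (i.e. with `dt_on == dt_off`), the pump would chatter around the threshold, which is the behaviour reflected in the high NON-OFF figures of the simplest reference cases.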