An external tubular linear permanent magnet generator (ETLPMG) is proposed for direct-drive wave energy conversion. With a quasi-Halbach array, the ETLPMG achieves a higher air-gap magnetic flux density than its internal and radially magnetized counterparts. An auxiliary tooth and a fractional-slot winding are implemented to reduce the detent force. The power density of the ETLPMG is about 7-8 times that of the internal one, and the efficiency reaches 90.03% under an 18 Ω load condition. The characteristics of the ETLPMG are analyzed by finite element analysis. Finally, a prototype is manufactured and its experimental results are used to verify the simulation.
This paper proposes a deep reinforcement learning (DRL) approach to combined heat and power (CHP) economic dispatch that is suitable for different operating scenarios and significantly reduces computational complexity without sacrificing accuracy. CHP economic dispatch problems are typically modeled as high-dimensional, non-smooth objective functions with a large number of non-linear constraints, so powerful optimization algorithms and considerable time are required to solve them. To reduce the solution time, most engineering applications linearize the optimization target and the device models. To avoid this complicated linearization process, this paper models the CHP economic dispatch problem as a Markov decision process (MDP), which makes the model highly encapsulated and preserves the input-output characteristics of the various devices. Furthermore, we adapt an advanced deep reinforcement learning algorithm, distributed proximal policy optimization (DPPO), to the CHP economic dispatch problem. With this algorithm, the agent is trained to explore optimal dispatch strategies for different operating scenarios and to respond efficiently to system emergencies. In the deployment phase, the trained agent generates the optimal control strategy in real time based on the current system state. Compared with existing optimization methods, the advantages of the proposed DRL method are mainly reflected in three aspects: 1) Adaptability: for a fixed network topology, the trained agent can handle the economic dispatch problem in various operating scenarios without recalculation. 2) High encapsulation: the user only needs to input the operating state to obtain a control strategy, whereas an optimization algorithm requires the constraints and other formulas to be rewritten for each situation.
3) Time-scale flexibility: it can be applied to both day-ahead optimal scheduling and real-time control. We also give a rigorous proof that the DRL method converges to the optimal solution. To evaluate the performance of the proposed economic dispatch algorithm, a comprehensive numerical analysis is conducted. The results show that the training time of our improved algorithm is 201 and 318 seconds shorter than those of two other advanced DRL algorithms, and the difference in economic performance between this method and optimization methods is only 0.029%. Even when the wind power of the system is zero, the trained agent can still find the optimal dispatch strategy without re-training.
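To make the MDP formulation concrete, the sketch below shows one way a CHP dispatch step could be cast as an environment an agent interacts with. This is an illustrative toy only: the class name `CHPDispatchEnv`, the heat-to-power ratio, the fuel costs, and the shortfall penalty are all assumptions, not the paper's actual model, and a real implementation would train a DPPO agent against this interface rather than call it directly.

```python
import random

class CHPDispatchEnv:
    """Toy MDP for CHP economic dispatch (illustrative; the paper's exact
    state/action spaces, cost model, and constraints are not given here).

    State:  (electric demand in kW, heat demand in kW)
    Action: (chp_power in kW, boiler_heat in kW) chosen by the agent
    Reward: negative operating cost, minus a penalty for unmet demand
    """
    HEAT_TO_POWER = 1.2   # assumed heat recovered per kW of CHP power
    CHP_COST = 0.08       # assumed fuel cost of the CHP unit, $/kWh
    BOILER_COST = 0.05    # assumed fuel cost of the backup boiler, $/kWh
    PENALTY = 10.0        # assumed penalty for unserved demand, $/kWh

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.state = self._sample_demand()

    def _sample_demand(self):
        # Random demands stand in for the paper's varying operating scenarios.
        return (self.rng.uniform(50, 150), self.rng.uniform(40, 120))

    def step(self, action):
        chp_power, boiler_heat = action
        p_demand, h_demand = self.state
        heat_supplied = chp_power * self.HEAT_TO_POWER + boiler_heat
        cost = chp_power * self.CHP_COST + boiler_heat * self.BOILER_COST
        shortfall = (max(0.0, p_demand - chp_power)
                     + max(0.0, h_demand - heat_supplied))
        reward = -(cost + self.PENALTY * shortfall)
        self.state = self._sample_demand()  # move to the next scenario
        return self.state, reward

# Example interaction: one dispatch decision for the current demands.
env = CHPDispatchEnv()
next_state, reward = env.step((100.0, 20.0))
```

Encapsulating the device models inside `step` is what gives the approach its "high encapsulation" property: the agent only ever sees states and rewards, so swapping in a different boiler or cost curve does not change the agent's interface.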