Title: Q-learning based dynamic optimal CPS control methodology for interconnected power systems
Authors: Yu, T
Zhou, B
Chan, KW 
Keywords: Automatic generation control
Control performance standard
Markov decision process
Optimal control
Issue Date: 2009
Publisher: 中国电机工程学会 (Chinese Society for Electrical Engineering)
Source: 中国电机工程学报 (Proceedings of the Chinese Society for Electrical Engineering), 2009, v. 29, no. 19, p. 13-19
Journal: 中国电机工程学报 (Proceedings of the Chinese Society for Electrical Engineering) 
Abstract: The control performance standard (CPS)-based automatic generation control (AGC) problem defined by NERC is a stochastic multistage decision problem, which can be suitably modeled as a reinforcement learning (RL) problem under Markov decision process (MDP) theory. This paper adopts Q-learning as the RL algorithm, taking the CPS values of the interconnected power system as rewards. By adjusting a closed-loop CPS control rule to maximize the cumulative reward during online learning, the optimal CPS control strategy is obtained gradually. A practical semi-supervised pre-learning method is introduced to enhance the stability and convergence of the Q-learning controllers. Two case studies show that the proposed controllers markedly improve the robustness and adaptability of AGC systems while ensuring CPS compliance.
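The core idea in the abstract — a Q-learning agent that treats a CPS-like compliance score as its reward and iteratively improves a closed-loop control rule — can be illustrated with a minimal sketch. Everything below is an assumption for demonstration: the five-level discretization of the area control error (ACE), the three-action regulation set, and the reward shaping are toy stand-ins, not the paper's actual state/action design or CPS formula.

```python
import random

# Toy illustration (assumed design, not the paper's): a Q-learning agent
# whose reward is a CPS-like score that is highest when the discretized
# ACE (area control error) sits near zero.

random.seed(0)

STATES = range(5)        # discretized ACE levels; state 2 ~ ACE near zero
ACTIONS = (-1, 0, +1)    # decrease / hold / increase regulation output

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Toy plant: the action pushes the ACE level, with load noise added."""
    nxt = min(max(state + action + random.choice((-1, 0, 1)), 0), 4)
    reward = -abs(nxt - 2)   # CPS-like reward: best when ACE is near zero
    return nxt, reward

def choose(state):
    if random.random() < EPSILON:                      # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])   # exploit

# Online learning loop: the standard Q-learning update,
# Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
state = 0
for _ in range(5000):
    action = choose(state)
    nxt, reward = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = nxt

# The learned greedy policy should steer the ACE level toward state 2.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

The paper's semi-supervised pre-learning step would correspond, in this sketch, to initializing the Q-table from a conventional controller's behavior before the online loop begins, so that early exploration does not destabilize the system.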
ISSN: 0258-8013
Appears in Collections:Journal/Magazine Article
