Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/29307
Title: Q-learning based dynamic optimal CPS control methodology for interconnected power systems
Authors: Yu, T
Zhou, B
Chan, KW 
Keywords: Automatic generation control
Control performance standard
Markov decision process
Optimal control
Q-learning
Issue Date: 2009
Publisher: Chinese Society for Electrical Engineering (中国电机工程学会)
Source: 中国电机工程学报 (Proceedings of the Chinese Society for Electrical Engineering), 2009, v. 29, no. 19, p. 13-19
Journal: 中国电机工程学报 (Proceedings of the Chinese Society for Electrical Engineering) 
Abstract: The NERC control performance standard (CPS) based automatic generation control (AGC) problem is a stochastic multistage decision problem, which can be naturally modeled as a reinforcement learning (RL) problem under Markov decision process (MDP) theory. The paper adopts Q-learning as the RL algorithm, taking the CPS values as the rewards from the interconnected power systems. By adjusting a closed-loop CPS control rule to maximize the total reward during on-line learning, the optimal CPS control strategy is gradually obtained. A practical semi-supervisory pre-learning method is introduced to improve the stability and convergence of the Q-learning controllers. Two case studies show that the proposed controllers markedly enhance the robustness and adaptability of AGC systems while ensuring CPS compliance.
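The abstract describes a standard tabular Q-learning loop in which discretized system states serve as MDP states, regulation commands as actions, and the CPS value as the reward. A minimal illustrative sketch of that update rule follows; all names, the discretization, and the action set are hypothetical and not taken from the paper:

```python
import random

# Hypothetical tuning constants, not from the paper.
ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration probability

# Illustrative regulation adjustments (MW) as the discrete action set.
ACTIONS = [-50.0, 0.0, 50.0]

def select_action(q_table, state):
    """Epsilon-greedy selection: explore at random, otherwise pick the
    action with the highest current Q-value for this state."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def q_update(q_table, state, action, reward, next_state):
    """One Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)),
    where the reward r would be derived from the measured CPS value."""
    best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

In an AGC setting, each control cycle would observe the current state, apply `select_action`, measure the resulting CPS-based reward, and call `q_update`; the paper's semi-supervisory pre-learning phase would seed the Q-table before on-line operation.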
URI: http://hdl.handle.net/10397/29307
ISSN: 0258-8013
Appears in Collections:Journal/Magazine Article


Scopus citations: 15 (as of Sep 16, 2017)



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.