Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/8352
Title: Distributed multi-step Q(λ) learning for Optimal Power Flow of large-scale power grids
Authors: Yu, T
Liu, J
Chan, KW 
Wang, JJ
Keywords: Distributed Reinforcement Learning (DRL)
Multi-objective optimization
Optimal Power Flow (OPF)
Q(λ) learning
Issue Date: 2012
Publisher: Elsevier
Source: International journal of electrical power and energy systems, 2012, v. 42, no. 1, p. 614-620
Journal: International journal of electrical power and energy systems 
Abstract: This paper presents a novel distributed multi-step Q(λ) learning algorithm (DQ(λ)L) based on a multi-agent system for solving the large-scale multi-objective OPF problem. It does not require any manipulation of the conventional mathematical Optimal Power Flow (OPF) model. The large-scale power system is first partitioned into subsystems, each managed by an agent. Each agent adopts the standard multi-step Q(λ) learning algorithm to pursue its own objectives independently and approaches the global optimum through cooperation and coordination among agents. The proposed DQ(λ)L has been thoroughly studied and tested on the IEEE 9-bus and 118-bus systems. Case studies demonstrate that DQ(λ)L is feasible and effective for solving the multi-objective OPF problem in large-scale, complex power grids.
URI: http://hdl.handle.net/10397/8352
ISSN: 0142-0615
DOI: 10.1016/j.ijepes.2012.04.062
Appears in Collections: Journal/Magazine Article
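The abstract above states that each subsystem agent runs the standard multi-step Q(λ) learning algorithm on its own objectives. As a rough illustration of what such a per-agent update looks like, the following Python sketch implements a tabular, Watkins-style Q(λ) learner with eligibility traces. This is not the paper's implementation: the class name, the toy environment, and all parameter values are assumptions made for illustration, and the inter-agent cooperation and coordination that the paper uses to reach the global multi-objective OPF optimum are not modeled here.

```python
import random
from collections import defaultdict


class QLambdaAgent:
    """Tabular Watkins-style Q(lambda) learner with accumulating traces (illustrative sketch)."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, lam=0.8, epsilon=0.1):
        self.actions = list(actions)   # discrete action set (e.g. discretized control steps)
        self.alpha, self.gamma, self.lam, self.epsilon = alpha, gamma, lam, epsilon
        self.q = defaultdict(float)    # Q(s, a) table
        self.e = defaultdict(float)    # eligibility traces e(s, a)

    def greedy(self, state):
        # Greedy action w.r.t. the current Q-table
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def act(self, state):
        # epsilon-greedy action selection
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return self.greedy(state)

    def update(self, s, a, r, s_next, a_next):
        """One Q(lambda) backup, propagated over all eligible state-action pairs."""
        a_star = self.greedy(s_next)                      # greedy successor action
        delta = r + self.gamma * self.q[(s_next, a_star)] - self.q[(s, a)]
        self.e[(s, a)] += 1.0                             # accumulating trace

        for key in list(self.e):
            self.q[key] += self.alpha * delta * self.e[key]
            if a_next == a_star:                          # greedy step: decay traces
                self.e[key] *= self.gamma * self.lam
            else:                                         # exploratory step: cut traces
                self.e[key] = 0.0


if __name__ == "__main__":
    # Hypothetical toy problem (not from the paper): drive a scalar state toward a target value.
    agent = QLambdaAgent(actions=[-1, 0, +1])
    for episode in range(200):
        state = random.randint(0, 10)
        action = agent.act(state)
        for _ in range(50):
            next_state = max(0, min(10, state + action))
            reward = -abs(next_state - 5)     # penalty for distance from the target state
            next_action = agent.act(next_state)
            agent.update(state, action, reward, next_state, next_action)
            state, action = next_state, next_action
        agent.e.clear()                       # reset traces between episodes
```

In the paper's distributed setting, one such learner per subsystem would update its local Q-table in this fashion, with the reward and coordination signals shaped by the multi-objective OPF formulation.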

Scopus™ citations: 14 (last week: 0; last month: 0), as of Sep 7, 2017
Web of Science™ citations: 12 (last week: 0; last month: 1), as of Sep 14, 2017
