Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/116189
Title: Deep reinforcement learning approach for dynamic distribution network reconfiguration based on sequential masking
Authors: Wang, R.; Bi, X.; Bu, S.; Tang, Z.
Issue Date: Oct-2025
Source: IEEE Transactions on Neural Networks and Learning Systems, Oct. 2025, v. 36, no. 10, p. 19270-19284
Abstract: Dynamic distribution network reconfiguration (DDNR) is a widely used technique for the secure and economic operation of power distribution networks (PDNs), especially in the presence of high-penetration renewable energy sources (RESs). DDNR is realized by controlling the on/off status of remotely controlled switches (RCSs) equipped at power lines in PDNs to optimize power flows. Thanks to the enhanced data availability of PDNs, data-driven solutions to DDNR, such as deep reinforcement learning (DRL), have gained growing attention recently. However, DDNR solves a sequence of combinatorial problems featuring a vast and sparse action space incurred by a so-called "radiality constraint," which is highly challenging for DRL agents to handle. Existing DRL methods are either unscalable to large-scale problems or potentially restrict optimality. Hence, we propose a sequential masking strategy to decompose the complex action space into a sequence of maskable sub-action spaces. A gated recurrent unit (GRU)-based agent and an adapted soft actor critic (SAC) algorithm are designed accordingly, producing a data-efficient, safety-guaranteed, and scalable DRL solution to the DDNR problem. Comprehensive comparisons with existing data-driven methods and model-based benchmarks are conducted via various case studies, demonstrating the advantages of the proposed method in both algorithmic performance and scalability.
Keywords: Deep reinforcement learning (DRL); Dynamic distribution network reconfiguration (DDNR); Sequential masking; Soft actor critic (SAC)
Publisher: Institute of Electrical and Electronics Engineers
Journal: IEEE Transactions on Neural Networks and Learning Systems
ISSN: 2162-237X
EISSN: 2162-2388
DOI: 10.1109/TNNLS.2025.3574208
Rights: © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The following publication R. Wang, X. Bi, S. Bu and Z. Tang, "Deep Reinforcement Learning Approach for Dynamic Distribution Network Reconfiguration Based on Sequential Masking," in IEEE Transactions on Neural Networks and Learning Systems, vol. 36, no. 10, pp. 19270-19284, Oct. 2025 is available at https://doi.org/10.1109/TNNLS.2025.3574208.
Appears in Collections: Journal/Magazine Article
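The sequential masking strategy described in the abstract can be illustrated with a minimal, self-contained sketch: an agent picks one sub-action per step, and each pick updates a validity mask before the next step's softmax. This is an assumption-laden toy, not the paper's implementation: the real method enforces the radiality constraint of the distribution network and uses a GRU-based SAC agent, whereas here the masking rule is simply "a chosen index cannot be chosen again," and the names `masked_softmax` and `sequential_masked_selection` are illustrative.

```python
import numpy as np

def masked_softmax(logits, mask):
    """Softmax over valid entries only; masked entries get zero probability."""
    z = np.where(mask, logits, -np.inf)
    z = z - z[mask].max()          # numerical stability on the valid entries
    e = np.exp(z)
    return e / e.sum()

def sequential_masked_selection(logits_seq, n_actions, rng):
    """Pick one sub-action per step; each pick is masked out of later steps
    (a toy stand-in for a feasibility constraint such as radiality)."""
    mask = np.ones(n_actions, dtype=bool)
    chosen = []
    for logits in logits_seq:
        p = masked_softmax(logits, mask)
        a = rng.choice(n_actions, p=p)
        chosen.append(int(a))
        mask[a] = False            # shrink the next step's sub-action space
    return chosen
```

Because infeasible sub-actions receive zero probability before sampling, the agent never emits an invalid sequence, which is the sense in which masking makes the policy "safety-guaranteed" by construction rather than by penalty.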
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| Wang_Deep_Reinforcement_Learning.pdf | Pre-Published version | 4.73 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.