Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/112272
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Electrical and Electronic Engineering | en_US |
| dc.creator | Li, G | en_US |
| dc.creator | Or, SW | en_US |
| dc.date.accessioned | 2025-04-08T01:46:18Z | - |
| dc.date.available | 2025-04-08T01:46:18Z | - |
| dc.identifier.issn | 0093-9994 | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/112272 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
| dc.rights | © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US |
| dc.rights | The following publication G. Li and S. W. Or, "A Multi-Task Reinforcement Learning Approach for Optimal Sizing and Energy Management of Hybrid Electric Storage Systems Under Spatio-Temporal Urban Rail Traffic," in IEEE Transactions on Industry Applications, vol. 61, no. 2, pp. 1876-1886, March-April 2025 is available at https://doi.org/10.1109/TIA.2025.3531327. | en_US |
| dc.subject | Hybrid electric storage systems (HESSs) | en_US |
| dc.subject | Multi-task learning | en_US |
| dc.subject | Optimal sizing and energy management | en_US |
| dc.subject | Reinforcement learning | en_US |
| dc.subject | Urban rail transits | en_US |
| dc.title | A multi-task reinforcement learning approach for optimal sizing and energy management of hybrid electric storage systems under spatio-temporal urban rail traffic | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.spage | 1876 | en_US |
| dc.identifier.epage | 1886 | en_US |
| dc.identifier.volume | 61 | en_US |
| dc.identifier.issue | 2 | en_US |
| dc.identifier.doi | 10.1109/TIA.2025.3531327 | en_US |
| dcterms.abstract | Passenger flow fluctuation and delay-induced traffic regulation bring considerable challenges to cost-efficient regenerative braking energy utilization of hybrid electric storage systems (HESSs) in urban rail traction networks. This paper proposes a synergistic HESS sizing and energy management optimization framework based on multi-task reinforcement learning (MTRL) for enhancing the economic operation of HESSs under dynamic spatio-temporal urban rail traffic. The configuration-specific HESS control problem under various spatio-temporal traction load distributions is formulated as a multi-task Markov decision process (MTMDP), and an iterative sizing optimization approach considering daily service patterns is devised to minimize the HESS life cycle cost (LCC). Then, a dynamic traffic model composed of a Copula-based passenger flow generation method and a real-time timetable rescheduling algorithm incorporating a traction energy-passenger-time sensitivity matrix is developed to characterize multi-train traction load uncertainty. Furthermore, an MTRL algorithm based on a dueling double deep Q network with knowledge transfer is proposed to simultaneously learn a generalized control policy from annealing task-specific agents and operation environments for solving the MTMDP effectively. Comparative studies based on a real-world subway have validated the effectiveness of the proposed framework for LCC reduction of HESS operation under urban rail traffic. | en_US |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | IEEE transactions on industry applications, Mar.-Apr. 2025, v. 61, no. 2, pt. 1, p. 1876-1886 | en_US |
| dcterms.isPartOf | IEEE transactions on industry applications | en_US |
| dcterms.issued | 2025-03 | - |
| dc.identifier.eissn | 1939-9367 | en_US |
| dc.description.validate | 202504 bcch | en_US |
| dc.description.oa | Accepted Manuscript | en_US |
| dc.identifier.FolderNumber | a3510 | - |
| dc.identifier.SubFormID | 50278 | - |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | Innovation and Technology Commission of the HKSAR Government to the Hong Kong Branch of National Rail Transit Electrification and Automation Engineering Technology Research Center under Grant No. K-BBY1 | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | Green (AAM) | en_US |
Appears in Collections: Journal/Magazine Article
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| Li_Multi-Task_Reinforcement_Learning.pdf | Pre-Published version | 1.03 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.