Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113441
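The handle and DOI in this record can also be resolved programmatically. As a minimal sketch (assuming the standard Handle System proxy REST endpoint at `hdl.handle.net/api/handles/`, which serves JSON for public handles; no request is actually made here, only the resolver URLs are constructed from the identifiers shown in this record):

```python
# Build resolver URLs for this item from its persistent identifiers.
# Values are taken from this record's dc.identifier.uri and
# dc.identifier.doi fields.

HANDLE = "10397/113441"           # from dc.identifier.uri
DOI = "10.1109/TTE.2024.3523922"  # from dc.identifier.doi

def handle_url(handle: str) -> str:
    """Human-readable landing page via the Handle System proxy."""
    return f"http://hdl.handle.net/{handle}"

def handle_api_url(handle: str) -> str:
    """JSON metadata for the handle via the proxy's REST API
    (assumed endpoint shape)."""
    return f"https://hdl.handle.net/api/handles/{handle}"

def doi_url(doi: str) -> str:
    """Publisher landing page via the DOI resolver."""
    return f"https://doi.org/{doi}"

print(handle_url(HANDLE))      # http://hdl.handle.net/10397/113441
print(doi_url(DOI))            # https://doi.org/10.1109/TTE.2024.3523922
```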
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | -
dc.contributor | Department of Civil and Environmental Engineering | -
dc.creator | Bi, X | -
dc.creator | Shen, M | -
dc.creator | Gu, W | -
dc.creator | Chung, E | -
dc.creator | Wang, Y | -
dc.date.accessioned | 2025-06-10T01:41:52Z | -
dc.date.available | 2025-06-10T01:41:52Z | -
dc.identifier.uri | http://hdl.handle.net/10397/113441 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication X. Bi, M. Shen, W. Gu, E. Chung and Y. Wang, "Real-Time Planning of Route, Speed, and Charging for Electric Delivery Vehicles: A Deep Reinforcement Learning Approach," in IEEE Transactions on Transportation Electrification, vol. 11, no. 2, pp. 7066-7082, April 2025 is available at https://doi.org/10.1109/TTE.2024.3523922. | en_US
dc.subject | Deep reinforcement learning (DRL) | en_US
dc.subject | Delivery planning | en_US
dc.subject | Electric vehicle (EV) | en_US
dc.subject | Speed-varying range (SVR) | en_US
dc.subject | Uncertain traffic condition | en_US
dc.title | Real-time planning of route, speed, and charging for electric delivery vehicles : a deep reinforcement learning approach | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 7066 | -
dc.identifier.epage | 7082 | -
dc.identifier.volume | 11 | -
dc.identifier.issue | 2 | -
dc.identifier.doi | 10.1109/TTE.2024.3523922 | -
dcterms.abstract | Motor vehicles typically exhibit a "speed-varying range" (SVR) characteristic. For battery-powered electric vehicles (BEVs), the range diminishes at higher speeds. This characteristic greatly impacts BEV operation in demanding commercial uses like express delivery, given BEVs' limited range and long recharge times. In view of the above, this article examines a new electric vehicle routing problem (VRP) that explicitly models BEVs' SVR and considers the joint planning of BEV route, speed, and charging under stochastic traffic conditions. A deep reinforcement learning (DRL) approach that exploits the interdependence among the above three decision aspects is then developed to generate real-time policies. Experiments on hypothetical and real-world instances show that the proposed approach can efficiently find high-quality policies that effectively accommodate BEVs' SVR. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE transactions on transportation electrification, Apr. 2025, v. 11, no. 2, p. 7066-7082 | -
dcterms.isPartOf | IEEE transactions on transportation electrification | -
dcterms.issued | 2025-04 | -
dc.identifier.scopus | 2-s2.0-105001499286 | -
dc.identifier.eissn | 2332-7782 | -
dc.description.validate | 202506 bcch | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | a3655 | en_US
dc.identifier.SubFormID | 50592 | en_US
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | National Natural Science Foundation of China; Sichuan Science and Technology Program; Fundamental Research Funds for the Central Universities, China; UIC Start-up Research Fund | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Bi_Real-time_Planning_Route.pdf | Pre-Published version | 5.45 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.