Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/117415
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | en_US
dc.contributor | Department of Logistics and Maritime Studies | en_US
dc.creator | Chen, J | en_US
dc.creator | Zhang, Y | en_US
dc.creator | Zhou, Y | en_US
dc.creator | Wu, Y | en_US
dc.creator | Chung, E | en_US
dc.creator | Sartoretti, G | en_US
dc.date.accessioned | 2026-02-24T01:36:59Z | -
dc.date.available | 2026-02-24T01:36:59Z | -
dc.identifier.issn | 1524-9050 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/117415 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2026 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication J. Chen, Y. Zhang, Y. Zhou, Y. Wu, E. Chung and G. Sartoretti, "Learning-Based Lane Selection and Driving Orders for Connected Automated Vehicles at Multi-Lane Freeway Merging Sections," in IEEE Transactions on Intelligent Transportation Systems, vol. 27, no. 3, pp. 3369-3382, March 2026 is available at https://doi.org/10.1109/TITS.2025.3643420. | en_US
dc.subject | Connected automated vehicles | en_US
dc.subject | Deep reinforcement learning | en_US
dc.subject | Multi-lane on-ramp merging | en_US
dc.title | Learning-based lane selection and driving orders for connected automated vehicles at multi-lane freeway merging sections | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 3369 | en_US
dc.identifier.epage | 3382 | en_US
dc.identifier.volume | 27 | en_US
dc.identifier.issue | 3 | en_US
dc.identifier.doi | 10.1109/TITS.2025.3643420 | en_US
dcterms.abstract | Cooperative control of connected automated vehicles (CAVs) offers a promising solution for reducing traffic congestion and accidents. However, existing optimization-based and search-based methods for trajectory planning and vehicle scheduling struggle with real-time multi-vehicle control. This paper introduces a hybrid bi-level approach that nests optimization modelling within deep reinforcement learning (DRL) to jointly optimize vehicle sequences, lane selections, and trajectories, providing a rapid, safe, and high-quality solution to enhance traffic performance at multi-lane freeway merging sections. Specifically, we approach the problem of lane selection and vehicle sequencing for multiple vehicles as a multi-step decision-making process. At the upper level, we design a DRL agent with an attention-based encoder-decoder structure that auto-regressively constructs driving sequences and lane choices. At each step, it generates a probability matrix to select the next passing vehicle and its target lane based on prior decisions. The attention mechanism enables the centralized upper level to adapt to scenarios with varying vehicle counts without retraining. At the lower level, we formulate a model predictive control (MPC) planner to generate safe trajectories. The resulting travel delay guides the learning of the upper-level DRL agent to maximize overall traffic efficiency. Moreover, we introduce a leader-and-lane-specific credit assignment mechanism that leverages domain knowledge to link each action with its associated travel delays. This mechanism enables the agent to accurately recognize the impact of decisions on total delay, enhancing learning performance. Simulation results demonstrate the proposed approach's superior real-time performance and scalability from several to over a dozen vehicles, making it well-suited for practical automated merging tasks. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE transactions on intelligent transportation systems, Mar. 2026, v. 27, no. 3, p. 3369-3382 | en_US
dcterms.isPartOf | IEEE transactions on intelligent transportation systems | en_US
dcterms.issued | 2026-03 | -
dc.identifier.scopus | 2-s2.0-105028012307 | -
dc.identifier.eissn | 1558-0016 | en_US
dc.description.validate | 202602 bcjz | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.SubFormID | G001047/2026-02 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | This work was supported in part by the General Research Fund (InCoMe) of the University Grants Committee of Hong Kong and A*STAR, CISCO Systems (USA) Pte. Ltd., under Grant 15207320; and in part by the National University of Singapore under its Cisco-NUS Accelerated Digital Economy Corporate Laboratory under Award I21001E0002. | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Chen_Learning-based_Lane_Selection.pdf | Pre-Published version | 12.12 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.