Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/116209
Title: Large language model-enhanced reinforcement learning for generic bus holding control strategies
Authors: Yu, J 
Wang, Y 
Ma, W 
Issue Date: Aug-2025
Source: Transportation research. Part E, Logistics and transportation review, Aug. 2025, v. 200, 104142
Abstract: Bus holding control is a widely adopted strategy for maintaining stability and improving the operational efficiency of bus systems. Traditional model-based methods often suffer from low accuracy in bus state prediction and passenger demand estimation. In contrast, Reinforcement Learning (RL), as a data-driven approach, has demonstrated great potential in formulating bus holding strategies. RL determines optimal control strategies by maximizing the cumulative reward, which reflects the overall control goals. However, translating sparse and delayed control goals in real-world tasks into dense and real-time rewards for RL is challenging, normally requiring extensive manual trial-and-error. To address this, this study introduces an automatic reward generation paradigm that leverages the in-context learning and reasoning capabilities of Large Language Models (LLMs). This new paradigm, termed LLM-enhanced RL, comprises several LLM-based modules: a reward initializer, a reward modifier, a performance analyzer, and a reward refiner. These modules cooperate to initialize and iteratively improve the reward function according to feedback from training and test results for the specified RL-based task. Ineffective reward functions generated by the LLM are filtered out to ensure the stable evolution of the RL agents’ performance over iterations. To evaluate the feasibility of the proposed LLM-enhanced RL paradigm, it is applied to extensive bus holding control scenarios that vary in the number of bus lines, stops, and passenger demand. The results demonstrate the superiority, generalization capability, and robustness of the proposed paradigm compared to vanilla RL strategies, an LLM-based controller, physics-based feedback controllers, and optimization-based controllers. This study sheds light on the great potential of utilizing LLMs in various smart mobility applications.
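The abstract's initialize–train–analyze–refine loop with filtering of ineffective rewards can be sketched as follows. This is a minimal illustrative stand-in, not the authors' implementation: the LLM call is stubbed with a fixed headway-regularity reward, and `evaluate` replaces actual RL training with a toy scoring over hand-coded bus states. All names (`llm_propose_reward`, `evaluate`, `llm_enhanced_rl`) are hypothetical.

```python
from typing import Callable

# A reward function maps a bus state (here a dict) to a scalar reward.
RewardFn = Callable[[dict], float]

def llm_propose_reward(prompt: str) -> RewardFn:
    """Stand-in for an LLM call (reward initializer / modifier / refiner).

    A real system would prompt an LLM and parse the generated reward code;
    here we return a fixed reward penalizing deviation from the target headway.
    """
    return lambda state: -abs(state["headway"] - state["target_headway"])

def evaluate(reward_fn: RewardFn) -> float:
    """Proxy for training an RL agent with reward_fn and reporting a test score.

    Here: the mean reward over a few hand-coded bus states (seconds of headway).
    """
    states = [
        {"headway": 310.0, "target_headway": 300.0},
        {"headway": 295.0, "target_headway": 300.0},
        {"headway": 300.0, "target_headway": 300.0},
    ]
    return sum(reward_fn(s) for s in states) / len(states)

def llm_enhanced_rl(iterations: int = 3) -> tuple[RewardFn, float]:
    """Initialize a reward, then iteratively refine it, keeping only
    candidates that do not degrade performance (the filtering step)."""
    best_fn = llm_propose_reward("initialize reward for bus holding control")
    best_score = evaluate(best_fn)
    for _ in range(iterations):
        # Performance analyzer: summarize results as feedback for the refiner.
        analysis = f"test score = {best_score:.2f}"
        candidate = llm_propose_reward(f"refine reward given: {analysis}")
        score = evaluate(candidate)
        if score >= best_score:  # filter out ineffective reward functions
            best_fn, best_score = candidate, score
    return best_fn, best_score
```

With a real LLM in place of `llm_propose_reward`, the filtering condition is what the abstract describes as discarding ineffective generated rewards so performance evolves stably across iterations.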
Keywords: Bus bunching
Control strategy
Deep reinforcement learning
Dynamic holding
Large language model
Publisher: Pergamon Press
Journal: Transportation research. Part E, Logistics and transportation review 
ISSN: 1366-5545
EISSN: 1878-5794
DOI: 10.1016/j.tre.2025.104142
Appears in Collections: Journal/Magazine Article

Open Access Information
Status: embargoed access
Embargo End Date: 2028-08-31

SCOPUS™ Citations: 4 (as of Apr 3, 2026)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.