Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/97450
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Civil and Environmental Engineering | en_US |
| dc.creator | Liang, E | en_US |
| dc.creator | Wen, K | en_US |
| dc.creator | Lam, WHK | en_US |
| dc.creator | Sumalee, A | en_US |
| dc.creator | Zhong, R | en_US |
| dc.date.accessioned | 2023-03-06T01:18:36Z | - |
| dc.date.available | 2023-03-06T01:18:36Z | - |
| dc.identifier.issn | 2162-237X | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/97450 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
| dc.rights | © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US |
| dc.rights | The following publication Liang, E., Wen, K., Lam, W. H., Sumalee, A., & Zhong, R. (2021). An integrated reinforcement learning and centralized programming approach for online taxi dispatching. IEEE Transactions on Neural Networks and Learning Systems, 33(9), 4742-4756 is available at https://doi.org/10.1109/TNNLS.2021.3060187. | en_US |
| dc.subject | Deep reinforcement learning (RL) | en_US |
| dc.subject | Multiagent system | en_US |
| dc.subject | Online vehicle routing | en_US |
| dc.subject | Stochastic network traffic | en_US |
| dc.subject | Vehicle dispatching | en_US |
| dc.title | An integrated reinforcement learning and centralized programming approach for online taxi dispatching | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.spage | 4742 | en_US |
| dc.identifier.epage | 4756 | en_US |
| dc.identifier.volume | 33 | en_US |
| dc.identifier.issue | 9 | en_US |
| dc.identifier.doi | 10.1109/TNNLS.2021.3060187 | en_US |
| dcterms.abstract | Balancing the supply and demand for ride-sourcing companies is a challenging issue, especially with real-time requests and stochastic traffic conditions of large-scale congested road networks. To tackle this challenge, this article proposes a robust and scalable approach that integrates reinforcement learning (RL) and a centralized programming (CP) structure to promote real-time taxi operations. Both real-time order matching decisions and vehicle relocation decisions at the microscopic network scale are integrated within a Markov decision process framework. The RL component learns the decomposed state-value function, which represents the taxi drivers' experience, the off-line historical demand pattern, and the traffic network congestion. The CP component plans nonmyopic decisions for drivers collectively under the prescribed system constraints to explicitly realize cooperation. Furthermore, to circumvent sparse reward and sample imbalance problems over the microscopic road network, this article proposes a temporal-difference learning algorithm with prioritized gradient descent and adaptive exploration techniques. A simulator is built and trained with the Manhattan road network and New York City yellow taxi data to simulate the real-time vehicle dispatching environment. Both centralized and decentralized taxi dispatching policies are examined with the simulator. This case study shows that the proposed approach can further improve taxi drivers' profits while reducing customers' waiting times compared to several existing vehicle dispatching algorithms. | en_US |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | IEEE transactions on neural networks and learning systems, Sept. 2022, v. 33, no. 9, p. 4742-4756 | en_US |
| dcterms.isPartOf | IEEE transactions on neural networks and learning systems | en_US |
| dcterms.issued | 2022-09 | - |
| dc.identifier.scopus | 2-s2.0-85102239222 | - |
| dc.identifier.eissn | 2162-2388 | en_US |
| dc.description.validate | 202203 bcfc | en_US |
| dc.description.oa | Accepted Manuscript | en_US |
| dc.identifier.FolderNumber | CEE-0506 | - |
| dc.description.fundingSource | Self-funded | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.identifier.OPUS | 46480150 | - |
| dc.description.oaCategory | Green (AAM) | en_US |
Appears in Collections: Journal/Magazine Article
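The abstract mentions a temporal-difference learning algorithm for the decomposed state-value function. As a minimal sketch of the underlying idea (not the paper's actual model), a tabular TD(0) update over illustrative network zones might look like the following; the function name, zone identifiers, rewards, and learning parameters are all hypothetical placeholders:

```python
# Minimal sketch of a tabular TD(0) state-value update, illustrating the
# kind of temporal-difference learning the abstract refers to. All names
# and numbers here are illustrative, not taken from the paper.

def td0_update(values, state, reward, next_state, alpha=0.1, gamma=0.95):
    """One TD(0) step: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))."""
    td_error = reward + gamma * values[next_state] - values[state]
    values[state] += alpha * td_error
    return td_error

# Toy episode over three hypothetical network zones.
values = {"zone_a": 0.0, "zone_b": 0.0, "zone_c": 0.0}
transitions = [("zone_a", 1.0, "zone_b"), ("zone_b", 0.5, "zone_c")]
for s, r, s_next in transitions:
    td0_update(values, s, r, s_next)
```

In the paper's setting, such value estimates would feed the centralized programming component, which assigns matching and relocation decisions across drivers subject to system constraints; the prioritized gradient descent and adaptive exploration techniques described in the abstract are not reproduced here.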
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| Lam_Integrated_Reinforcement_Learning.pdf | Pre-Published version | 10.56 MB | Adobe PDF | View/Open |
Page views: 109 (as of Nov 9, 2025)
Downloads: 535 (as of Nov 9, 2025)
SCOPUS citations: 49 (as of Dec 19, 2025)
Web of Science citations: 55 (as of Dec 18, 2025)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.