Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/110015
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Civil and Environmental Engineering | - |
dc.creator | Gu, Z | - |
dc.creator | Wang, Y | - |
dc.creator | Ma, W | - |
dc.creator | Liu, Z | - |
dc.date.accessioned | 2024-11-20T07:30:51Z | - |
dc.date.available | 2024-11-20T07:30:51Z | - |
dc.identifier.issn | 2772-5871 | - |
dc.identifier.uri | http://hdl.handle.net/10397/110015 | - |
dc.language.iso | en | en_US |
dc.publisher | Elsevier Ltd | en_US |
dc.rights | © 2024 The Authors. Published by Elsevier Ltd on behalf of Southeast University. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/) | en_US |
dc.rights | The following publication Gu, Z., Wang, Y., Ma, W., & Liu, Z. (2024). A joint travel mode and departure time choice model in dynamic multimodal transportation networks based on deep reinforcement learning. Multimodal Transportation, 3(3), 100137 is available at https://doi.org/10.1016/j.multra.2024.100137. | en_US |
dc.subject | Deep reinforcement learning | en_US |
dc.subject | Departure time choice | en_US |
dc.subject | Microscopic traffic simulation | en_US |
dc.subject | Mode choice | en_US |
dc.subject | Multimodal transportation | en_US |
dc.title | A joint travel mode and departure time choice model in dynamic multimodal transportation networks based on deep reinforcement learning | en_US |
dc.type | Journal/Magazine Article | en_US |
dc.identifier.volume | 3 | - |
dc.identifier.issue | 3 | - |
dc.identifier.doi | 10.1016/j.multra.2024.100137 | - |
dcterms.abstract | Making travel choices in dynamic multimodal transportation networks is non-trivial. In this paper, we tackle this problem by proposing a new joint travel mode and departure time choice (JTMDTC) model based on deep reinforcement learning (DRL). The objective of the model is to maximize individuals' travel utilities across multiple days, which is accomplished by establishing a problem-specific Markov decision process to characterize the multi-day JTMDTC and developing a customized Deep Q-Network as the resolution scheme. To make the approach applicable to many individuals with travel decision-making requests, a clustering method is integrated with DRL to obtain representative individuals for model training, resulting in an elegant and computationally efficient approach. Extensive numerical experiments based on multimodal microscopic traffic simulation are conducted on a real-world network in Suzhou, China to demonstrate the effectiveness of the proposed approach. The results indicate that the proposed approach makes (near-)optimal JTMDTC decisions for different individuals in complex traffic environments, that it consistently yields higher travel utilities than the alternatives, and that it is robust to changes in model parameters. | - |
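The abstract's core idea, learning a joint (mode, departure time) choice that maximizes multi-day travel utility, can be illustrated with a minimal sketch. Everything below is assumed for illustration and is not the authors' model: a tabular Q-learning stand-in replaces their customized Deep Q-Network, the two modes and three departure slots are hypothetical, and `simulate_day` is a toy noisy-utility function standing in for their multimodal microscopic traffic simulation.

```python
import random

# Hypothetical joint action space: 2 modes x 3 departure slots.
MODES = ["car", "metro"]
SLOTS = ["7:00", "7:30", "8:00"]
ACTIONS = [(m, s) for m in MODES for s in SLOTS]

# Assumed base utilities (stand-in for simulated travel utility).
BASE_UTILITY = {
    ("car", "7:00"): 2.0, ("car", "7:30"): 1.0, ("car", "8:00"): 0.5,
    ("metro", "7:00"): 1.5, ("metro", "7:30"): 3.0, ("metro", "8:00"): 1.2,
}

def simulate_day(action, rng):
    """Return the noisy travel utility experienced on one day."""
    return BASE_UTILITY[action] + rng.gauss(0.0, 0.1)

def train(days=2000, alpha=0.1, epsilon=0.2, seed=0):
    """Learn Q-values for joint choices over repeated days (epsilon-greedy)."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}  # single-state Q-table
    for _ in range(days):
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)        # explore a random joint choice
        else:
            a = max(q, key=q.get)          # exploit the current best choice
        r = simulate_day(a, rng)
        q[a] += alpha * (r - q[a])         # running-average Q update
    return q
```

After enough simulated days the agent's preferred joint action converges to the highest-utility (mode, slot) pair; the paper's clustering step would train such an agent once per representative individual rather than per traveler.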
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | Multimodal transportation, Sept 2024, v. 3, no. 3, 100137 | - |
dcterms.isPartOf | Multimodal transportation | - |
dcterms.issued | 2024-09 | - |
dc.identifier.scopus | 2-s2.0-85192505753 | - |
dc.identifier.eissn | 2772-5863 | - |
dc.identifier.artn | 100137 | - |
dc.description.validate | 202411 bcch | - |
dc.description.oa | Version of Record | en_US |
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
dc.description.fundingSource | Others | en_US |
dc.description.fundingText | National Natural Science Foundation of China; High-Level Personnel Project of Jiangsu Province; Fundamental Research Funds for the Central Universities; Start-up Research Fund of Southeast University | en_US |
dc.description.pubStatus | Published | en_US |
dc.description.oaCategory | CC | en_US |
Appears in Collections: | Journal/Magazine Article |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
1-s2.0-S2772586324000182-main.pdf | | 3.66 MB | Adobe PDF | View/Open |