Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/115490
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Logistics and Maritime Studies | - |
| dc.creator | Cao, Z | en_US |
| dc.creator | Sun, Q | en_US |
| dc.creator | Wang, W | en_US |
| dc.creator | Wang, S | en_US |
| dc.date.accessioned | 2025-10-02T02:22:27Z | - |
| dc.date.available | 2025-10-02T02:22:27Z | - |
| dc.identifier.issn | 0968-090X | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/115490 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Elsevier Ltd | en_US |
| dc.subject | Berth allocation | en_US |
| dc.subject | Dry bulk export terminal | en_US |
| dc.subject | Multi-agent reinforcement learning | en_US |
| dc.subject | Navigation channel | en_US |
| dc.subject | Port operation | en_US |
| dc.subject | Ship scheduling | en_US |
| dc.title | Berth allocation in dry bulk export terminals with channel restrictions | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 179 | en_US |
| dc.identifier.doi | 10.1016/j.trc.2025.105263 | en_US |
| dcterms.abstract | Efficient berth allocation (BA) is critical to port management, as berthing time and location directly affect operational efficiency. In dry bulk export terminals, the BA problem is further complicated by deballasting delays and pre-deballasting procedures, particularly under restrictive channel conditions: terminal operators must balance pre-deballasting requirements against timely berthing to minimize delays. To address these challenges, we formulate the BA problem as a dynamic program, enabling sequential decision-making for each ship at every stage. To handle the extensive state-action space, we propose a hierarchical decision framework that divides each stage into four planning-level substages and one scheduling-level substage, each handled by a dedicated agent. The planning level determines berthing positions and the ship sequence, while the scheduling level coordinates berthing, channel access, and deballasting timelines based on the planning outcomes. We propose a Planning by Reinforcement Learning and Scheduling by Optimization (PRLSO) approach, in which each agent employs either reinforcement learning (RL) or optimization, depending on the characteristics of its substage. By confining the RL-based agents to a reduced decision space, we significantly lower training complexity; the remaining scheduling problem is then solved at a reduced scale that avoids computational intractability. Experimental results show that the proposed method generates high-quality solutions in near real-time, even for large-scale instances. The framework also improves training efficiency and supports industrial-scale implementation. | - |
| dcterms.accessRights | embargoed access | en_US |
| dcterms.bibliographicCitation | Transportation research. Part C, Emerging technologies, Oct. 2025, v. 179, 105263 | en_US |
| dcterms.isPartOf | Transportation research. Part C, Emerging technologies | en_US |
| dcterms.issued | 2025-10 | - |
| dc.identifier.scopus | 2-s2.0-105010702405 | - |
| dc.identifier.eissn | 1879-2359 | en_US |
| dc.identifier.artn | 105263 | en_US |
| dc.description.validate | 202510 bcch | - |
| dc.description.oa | Not applicable | en_US |
| dc.identifier.SubFormID | G000156/2025-08 | - |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | This work was supported by the National Natural Science Foundation of China [Grant No. 72371221, 72371143]. The authors would like to express gratitude to the PolyU Maritime Data and Sustainable Development Centre (PMDC) for its invaluable support in providing data and equipment. | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.date.embargo | 2027-10-31 | en_US |
| dc.description.oaCategory | Green (AAM) | en_US |
| Appears in Collections: | Journal/Magazine Article | |
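To make the hierarchical decomposition described in the abstract concrete, the sketch below separates a planning level (which fixes berthing positions and the service sequence) from a scheduling level (which fixes channel-access and berthing times). This is only an illustrative toy under stated assumptions, not the paper's PRLSO method: the `sequencing_policy`, the round-robin berth assignment, the `channel_headway` parameter, and the greedy earliest-feasible scheduling rule are all hypothetical stand-ins for the trained RL agents and the exact optimization the authors use.

```python
from dataclasses import dataclass

@dataclass
class Ship:
    arrival: int    # arrival time at anchorage
    deballast: int  # pre-deballasting duration required before berthing
    handling: int   # loading time once berthed

def plan(ships, n_berths, sequencing_policy):
    """Planning level: choose a service order and a berthing position per ship.
    `sequencing_policy` stands in for the trained RL agents; berth choice here
    is a simple round-robin stub."""
    order = sequencing_policy(ships)
    return [(idx, pos % n_berths) for pos, idx in enumerate(order)]

def schedule(plan_out, ships, channel_headway=1):
    """Scheduling level: given planning outcomes, fix channel-access and
    berthing times with a greedy earliest-feasible rule (a stand-in for the
    exact optimization of the scheduling substage)."""
    berth_free, channel_t, times = {}, 0, {}
    for idx, b in plan_out:
        s = ships[idx]
        ready = s.arrival + s.deballast          # deballasting must finish first
        t_channel = max(ready, channel_t)        # respect the channel restriction
        t_berth = max(t_channel, berth_free.get(b, 0))
        times[idx] = (t_channel, t_berth, t_berth + s.handling)
        channel_t = t_channel + channel_headway  # channel busy during transit
        berth_free[b] = t_berth + s.handling
    return times

# Usage: first-come-first-served sequencing as the RL stand-in.
ships = [Ship(0, 2, 5), Ship(1, 1, 4), Ship(3, 2, 3)]
fcfs = lambda ss: sorted(range(len(ss)), key=lambda i: ss[i].arrival)
times = schedule(plan(ships, n_berths=2, sequencing_policy=fcfs), ships)
print(times[0])  # ship 0: channel access at t=2, berths at t=2, finishes loading at t=7
```

Because the planning outputs fully determine the scheduling inputs, the scheduling step operates on a much smaller decision space than the joint problem, which mirrors the reduced-scale scheduling claim in the abstract.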
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.