Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/118586
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Civil and Environmental Engineering | en_US |
| dc.creator | Shi, Y | en_US |
| dc.creator | Long, K | en_US |
| dc.creator | Lam, WHK | en_US |
| dc.creator | Li, X | en_US |
| dc.creator | Ma, W | en_US |
| dc.date.accessioned | 2026-04-27T07:07:22Z | - |
| dc.date.available | 2026-04-27T07:07:22Z | - |
| dc.identifier.issn | 0968-090X | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/118586 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Elsevier Ltd | en_US |
| dc.subject | Graph networks | en_US |
| dc.subject | Input correction | en_US |
| dc.subject | Physics-enhanced residual learning | en_US |
| dc.subject | Traffic assignment | en_US |
| dc.title | Iterative physics-enhanced residual learning for context-aware traffic assignment under biased inputs | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 183 | en_US |
| dc.identifier.doi | 10.1016/j.trc.2025.105498 | en_US |
| dcterms.abstract | Hybrid approaches combining physics-based models and data-driven methods have shown promise in traffic modeling by leveraging physical structure while enhancing learning flexibility. One representative example is Physics-Enhanced Residual Learning (PERL), which augments physics-based predictions with learned residuals to correct modeling errors. However, the effectiveness of the physics-based model can degrade under biased input features. To address this limitation, we propose iterative Physics-Enhanced Residual Learning (iPERL), an end-to-end framework designed to improve the robustness of physics-guided models under biased inputs. We apply iPERL to context-aware traffic assignment, in which explanatory inputs such as OD demand, link and node characteristics (e.g., capacity, free-flow speed), and performance function parameters may be biased due to indirect observations and calibration errors, while traffic conditions simultaneously vary with contextual factors such as time and weather. iPERL extends standard PERL by incorporating a residual-based input correction mechanism that iteratively calibrates these biased inputs using feedback from the residuals between predicted and observed flows. By integrating contextual features, iPERL further enables adaptive correction strategies under diverse traffic scenarios. We evaluate the framework on both synthetic and real-world networks. Results show that iPERL consistently outperforms baseline methods, including standard PERL, particularly when input bias or data scarcity is present. The proposed framework offers a robust, interpretable, and data-efficient solution for traffic flow estimation, with potential for generalization across networks and practical applications. | en_US |
| dcterms.accessRights | embargoed access | en_US |
| dcterms.bibliographicCitation | Transportation research. Part C, Emerging technologies, Feb. 2026, v. 183, 105498 | en_US |
| dcterms.isPartOf | Transportation research. Part C, Emerging technologies | en_US |
| dcterms.issued | 2026-02 | - |
| dc.identifier.scopus | 2-s2.0-105029736280 | - |
| dc.identifier.eissn | 1879-2359 | en_US |
| dc.identifier.artn | 105498 | en_US |
| dc.description.validate | 202604 bchy | en_US |
| dc.description.oa | Not applicable | en_US |
| dc.identifier.SubFormID | G001503/2026-04 | - |
| dc.description.fundingSource | RGC | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | Y. Shi, William H.K. Lam and W. Ma were supported by grants from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. PolyU/15206322 and PolyU/15227424). K. Long and X. Li were not supported by any funding for this work. | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.date.embargo | 2028-02-29 | en_US |
| dc.description.oaCategory | Green (AAM) | en_US |
Appears in Collections: Journal/Magazine Article
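The abstract describes a residual-based input correction mechanism that iteratively calibrates biased inputs using feedback from the gap between predicted and observed flows. The following is a minimal illustrative sketch of that general idea, not the authors' implementation: the `physics_model`, the BPR-style performance function, the step size, and the scalar feedback rule are all assumptions chosen for clarity.

```python
import numpy as np

def physics_model(demand, capacity):
    # Stand-in for a physics-based assignment step: a BPR-like congested
    # response mapping inputs to link flows (illustrative only).
    return demand * (1.0 + 0.15 * (demand / capacity) ** 4)

def iterative_input_correction(observed_flow, demand, capacity,
                               step=0.05, n_iter=100):
    """Iteratively nudge a biased demand input so the physics model's
    prediction matches the observed flows (hypothetical feedback rule)."""
    d = demand.copy()
    for _ in range(n_iter):
        residual = observed_flow - physics_model(d, capacity)
        d = d + step * residual  # correction driven by the residual
    return d

# Synthetic example: the true demand is 100, but the input is biased to 80.
capacity = np.array([120.0])
true_demand = np.array([100.0])
observed = physics_model(true_demand, capacity)
corrected = iterative_input_correction(observed, np.array([80.0]), capacity)
```

In iPERL this feedback would be produced by a learned, context-aware component rather than the fixed scalar rule above; the sketch only illustrates the fixed-point flavor of correcting inputs from output residuals.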
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.